Leaderboard

Popular Content

Showing content with the highest reputation on 10/22/21 in all areas

  1. No cryptocurrency? That one will be a global payment system.
    4 points
  2. You call it a "patch", but it's not a patch! If you are worried about reverting changes after following the tutorial to enable TPM and use an OVMF-compatible Secure Boot UEFI BIOS, why don't you install Win11 with the registry hacks that bypass those checks (a sketch follows below)? A friend of mine told me he was able to receive the Patch Tuesday update too. Once Unraid is upgraded, you can add TPM and change to the OVMF UEFI BIOS type.
    3 points
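     A minimal sketch of the widely circulated "LabConfig" registry bypass, run from a command prompt (Shift+F10) inside the Win11 installer; the key and value names here are the commonly documented ones, not taken from this post:
        :: skip the TPM 2.0 and Secure Boot checks during setup
        reg add "HKLM\SYSTEM\Setup\LabConfig" /v BypassTPMCheck /t REG_DWORD /d 1 /f
        reg add "HKLM\SYSTEM\Setup\LabConfig" /v BypassSecureBootCheck /t REG_DWORD /d 1 /f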
  3. PSA: For the next day or so (until I get the time), the New Apps section (and Show More for 7 additional pages) will show nothing but Linuxserver applications. This is due to a re-organization at Linuxserver (for the better), but it does require me to make some manual adjustments to the data files that the application feed utilizes in order to get the system back to "normal" again. There is nothing wrong with either the application feed or CA.
    2 points
  4. I can only speak for myself. I started looking at the beginning of October into how swTPM works, and also into supporting BitLocker without recovering the drive every time you reboot the host (unRAID); BitLocker is possible with the current method, but you have to recover the drive on every host reboot. The other way also involves creating a user script that may break your Dockers <- this is a thing that I can't confirm, but if you read back in the thread you will see that some users reported broken Docker containers on reboot with the other way. Please also keep in mind that a template needs to be created for Windows 11, and a more or less easy way of upgrading or changing the BIOS type of the VM to the new TPM type is also needed; this also involves writing step-by-step tutorials and so on... Keep in mind this is all time consuming and needs to be tested so that everything works correctly and nothing breaks. But the requirement for TPM and available Secure Boot is there (a sketch of the emulated TPM device follows below). Keep in mind this is all from my perspective as a community developer.
    2 points
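     A minimal sketch, purely illustrative and not from the post, of what an swtpm-backed emulated TPM looks like in a libvirt domain XML (this is stock libvirt syntax; whatever template Unraid eventually ships may differ):
        <tpm model='tpm-tis'>
          <backend type='emulator' version='2.0'/>
        </tpm>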
  5. Yeah, I was looking for what ^ said, "none of the above"; consider me a control group. Really, I only care about credit/debit cards; I see extra options as unnecessary.
    2 points
  6. I'll contact Steef about it to make sure he's planning to update the core image. If not, I'll roll my own and update here. I'll report back here once I know, though. Edit: the necessary libraries are already installed in this image, so no changes are necessary. This has already been tested by someone running the beta update. Please see this GitHub issue for more info. Cheers!
    2 points
  7. Per @Jaster's suggestion, how many individual servers are you running?
    1 point
  8. Hi everyone, I would like to get the Docker container of Webtrees (http://webtrees.net/), an open-source genealogy application, working with Unraid. I think this would be a great app to have. Anyone willing to help? https://github.com/H2CK/webtrees Cheers!
    1 point
  9. Funny you mention that, because I was just thinking about this "issue" earlier this week... of course an extra backup isn't really an "issue", just wasted space. I've slowly moved almost all of the content I care about off the computers on the network... they almost exclusively access data stored on the Unraid NAS or via Nextcloud, which makes this issue largely go away. The one exception is Adobe Lightroom... that catalog really wants to be on a local disk. I just set up Borgmatic in a Docker container under Windows to back up the Lightroom and User folders from my Windows PC to an Unraid share that is NOT backed up by Borg (pretty easy, and it works great). If I include that share in my Unraid backup, I'll have way more copies of the data than I really need; I don't need an incremental backup of an incremental backup. I think I'm moving away from the "funnel" mentality. Instead, I'll have Unraid/Borg create local and remote repos for its important data, and Windows/Borg create local and remote repos for its important data. The Windows "local" repo will be kept in an un-backed-up Unraid user share or on the unassigned drive I use for backup. Going forward, I'll probably do the same on any and all other PCs that have data I care about. The reality is that all of my other family members work almost exclusively out of the cloud (or my Unraid cloud), so there's very little risk of data loss.
    1 point
  10. Zenstates is designed for Ryzen processors; hence, presumably, the misleading feedback.
    1 point
  11. No changes in OpenCore; you can change the number of cores, just don't choose an odd number. I suggest copying and pasting your current XML somewhere as a backup, so if you mess something up you can paste it back and start again.
    1 point
  12. Check the link above if you want to try it in the Windows VM. In general, yes: without any patch, the only method to re-initialize the GPU is a restart or shutdown of the host. As far as I know it affects every AMD GPU prior to the 6000 series, but a 6000 series GPU requires that you sell parts of your body... prices are crazy. Nvidia GPUs don't suffer from the reset bug, so if you are not running a macOS VM (Kepler GPUs are supported up to Big Sur; Monterey will not support them) I would go for Nvidia. I'm no great expert on this reset bug, as I have only read around the issues; I don't own any AMD GPU.
    1 point
  13. You are missing the custom qemu args at the bottom of the XML; copy them back (highlighted in yellow): https://github.com/SpaceinvaderOne/Macinabox/blob/master/xml/Macinabox BigSur.xml#L134-L144 No oskey... no party!! Also remove these lines:
        <memballoon model='virtio'>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </memballoon>
    1 point
  14. I can reproduce this and I have reported the issue to Limetech.
    1 point
  15. There are several back traces and all mention corefreq, e.g.:
        Oct 19 05:40:52 MYSERVER kernel: Sys_MemInfo+0x20/0x9b [corefreqk]
    1 point
  16. That's not cable related; a SMART attribute is failing now, so yes, it should be replaced.
    1 point
  17. I have measured quite a lot of sticks. I don't have the time right now to write it all up properly, but here is the most frugal of the ones you can still buy new: Philips FM 16 FD 05 B/00 USB Stick White/Blue 16 GB MLC NAND Flash Memory USB 2.0, https://www.amazon.de/-/de/gp/product/B001I91SV2, currently €6.50. 0.159 W idle, 0.246 W under load/benchmark, measured with a ZHITING UM34C USB tester from Amazon. The name sounds odd at first, but it was rated as very accurate in various comparison tests. So far I have only read the values directly off the display, but I will also try its Bluetooth logging soon; that will make it easier to plot the curves. For comparison: my old Lexar 4GB USB 2 stick consumes as much at idle as this stick does under load. If there is interest, I will present all the measurements in a new topic. (Attached images: stick back, stick front, Ubuntu benchmark)
    1 point
  18. Try uninstalling the corefreq plugin (a command-line sketch follows below).
    1 point
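     A minimal sketch, assuming Unraid's plugin utility and that the plugin file is named corefreq.plg (my assumption; check /boot/config/plugins for the exact filename, or simply remove it from the Plugins tab in the GUI):
        plugin remove corefreq.plg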
  19. Unfortunately, yes. Attributes 1 and 200 should always be at 0 for Western Digital drives, and you have very high values. The last extended tests also failed with a read failure.
    1 point
  20. The problem is already evident in your comment: you say "crypto currency" because you don't want to imply using a certain one. Problem is, there are a bazillion cryptocurrencies now, and whenever you start accepting one, people will spam around asking why Doge coin is accepted but Shiba Inu coin is not. Then you also need a service in between that updates the pricing, as cryptocurrencies are far from stable, so you usually end up with a payment provider, which defeats the purpose of using crypto in the first place. Then you also need to convert that crypto back to FIAT currency, as crypto usually isn't accepted to pay the bills, so they need a tax guy to handle this stuff on top of the usual work. Same for customers: depending on where you are, buying something with your crypto could be a realized gain, which is then taxed, so you need documentation to track all this for tax season. Overall it's just one giant effort for a relatively small group of people who, if they practiced what they preach, would not use crypto anyway, because they will HODL forever.
    1 point
  21. I think the 55GB is the file system structure. It seems on par with what I see for XFS relative to your disk size. It is possible that NTFS has less overhead.
    1 point
  22. Because this is not a security issue but a feature addition. What is mandatory for you (running Win11) should not be mandatory for others. For me, running macOS is mandatory, so should I ask Unraid to include a patch with the OpenCore bootloader? I think not... Moreover, if you want to run Win11 with 6.9.2, you can.
    1 point
  23. Thanks a ton! That is exactly what I was looking for.
    1 point
  24. ...the respective external IPs you can always ping, of course. I was referring to the internal IPs here: local/dorm, home, and ovpn/transition. You should be able to ping the gateway IP of each remote interface when using the local gateway IP as source (a sketch follows below). That is the first, basic thing you need to establish. Then start using other clients in the respective networks to ping remote IPs, gateways and other clients... to test routes. Yes, in your home config, without a pfSense, the networks and routes to be pushed/published need to be configured in the ovpn server config. ...it's hard to quote your message via Tapatalk here on my small phone, sorry. You need to get your head around the way interfaces, gateways and routing work across IP networks, from the different perspectives of a client and a gateway. This is independent of the tools/parts used... once you know how your logical setup works, transfer the concept to the individual setup of pfSense and the ovpn client and server. Maybe start by drawing a diagram with interfaces, IPs and routes, then walk yourself through which paths a packet will take and which routes and gateways apply when doing a ping from A to B. Remember that for an IP connection, even for a single packet like a ping, you need a path from source IP to destination IP *and* a return path back from destination to source on the destination side. This applies to all networks involved on the paths. Sent from my SM-G780G using Tapatalk
    1 point
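     A minimal sketch of the source-pinned ping described above (Linux iputils ping; both addresses are hypothetical placeholders, not from the post):
        # ping the remote side's gateway while forcing the local gateway address as the source
        ping -c 4 -I 192.168.1.1 10.8.0.1
        # then repeat from an ordinary client in each network to verify the pushed routes
        ping -c 4 10.8.0.1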
  25. Thanks for your input:
    1 point
  26. I've also noticed that the default recording format has changed (from mp4 to ts), so if you have explicit output_args defined you may want to review them. Audio is now recorded by default, which was the entire reason I had an output_args ffmpeg override anyway. RRB
    1 point
  27. For those who updated from V0.9.1 to V0.9.2 and found their config.yml file was no longer valid... In V0.9.1 I didn't have RTMP included in my config.yml and I would get the error "Camera (camera name) has rtmp enabled, but rtmp is not assigned to an input.", but it would still work fine. With the change to V0.9.2 it no longer works: it is listed as a configuration error and the Docker container shuts down. Under each camera section I added the lines below, which seems to have fixed it; I no longer get the error in my logs and Frigate is working fine again. "rtmp:" should line up with your "ffmpeg", "detect", "record" etc. (see the sketch below). You'll need these lines for every camera:
        rtmp:
          enabled: false
      Here is the error message I was getting in my logs, again solved with the above yml code:
        Camera deck has rtmp enabled, but rtmp is not assigned to an input. (type=value_error)
        *************************************************************
        *** End Config Validation Errors ***
        *************************************************************
        [cmd] python3 exited 1
        [cont-finish.d] executing container finish scripts...
        [cont-finish.d] done.
        [s6-finish] waiting for services.
        [s6-finish] sending all processes the TERM signal.
        [s6-finish] sending all processes the KILL signal and exiting.
    1 point
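     A minimal sketch of where that block sits in a camera section (the camera name, stream path and detect settings are placeholders, not from the post):
        cameras:
          deck:
            ffmpeg:
              inputs:
                - path: rtsp://camera-address/stream
                  roles:
                    - detect
                    - record
            detect:
              width: 1280
              height: 720
            rtmp:
              enabled: false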
  28. Really glad that descriptions on cards are back, but boy do I not like the heavily reduced number of items and being forced to look at overblown large cards. I take it the layout makes sense on touch devices, especially single-row views, but on desktop? Quite jarring, to be frank. Too much scrolling, clicking "more", etc... Honestly? The old view is king in my books. The new side-loading panel for the details view is killer though, really appreciate that! Overall good work, I love the amount of effort that goes into this, but desktop usage is (partially) nerfed.
    1 point
  29. I would really like to see the new/trending/top new installs links back on the sidebar. I use them for discovery a lot of the time, and right now you have to Show More, do your browsing, click on the apps link again, then Show More on the next category. That's my only real complaint with the redesign, I'm happy to have the option to toggle descriptions as well. Thanks for all the work, Squid!
    1 point
  30. Just gonna say it... I'm not a fan of the new CA GUI look and layout. I don't like that you have to click an "Info" button now on every app to see a description of the app. 👎
    1 point
  31. I know this isn't one of the poll options, but I would LOVE to see enhancements to the Docker interface: a GUI able to use docker-compose and stacks, similar to Portainer but not exactly. A lot of the things you find online use Compose, and having that option in the GUI would only make the user experience that much better.
    1 point
  32. I am a relatively new unRaid user with a small server. Originally unRaid was a hard sell for me, for the sole reason that it didn't have ZFS. I love how user friendly it is, and applications like Nextcloud and Plex were simply a few clicks. I am also a heavy Proxmox user, and due to how unRaid's parity works I got fed up and was about to leave unRaid. If ZFS is added it would be the best thing, and I would hands down become an unRaid power user too. There are a lot of things I just prefer about Proxmox, which is not a discussion for here, but I definitely won't be biased and will be running a hybrid setup thanks to ZFS.
    1 point
  33. What should also work: create the file "nextcloud.cnf" in the directory "/mnt/user/appdata/mariadb-official/config" with the following content:
        [mysqld]
        innodb_read_only_compressed = "OFF"
      After restarting the MariaDB Official container, everything should work as before. Found here. Via the web terminal you could implement it like this:
        echo "[mysqld]" > /mnt/user/appdata/mariadb-official/config/nextcloud.cnf
        echo "innodb_read_only_compressed = \"OFF\"" >> /mnt/user/appdata/mariadb-official/config/nextcloud.cnf
    1 point
  34. The reason it isn't on the list for this poll might not be so obvious. As it stands today, there are really 3 ways to do snapshots on Unraid (maybe more ;-). One is using btrfs snapshots at the filesystem layer. Another is using simple reflink copies, which still rely upon btrfs. Another still is using the tools built into QEMU. Each method has pros and cons. The QEMU method is universal, as it works on every filesystem we support because it isn't filesystem dependent; unfortunately it also performs incredibly slowly. Btrfs snapshots are really great, but you have to define subvolumes first to use them, and the underlying storage has to be formatted with btrfs. Reflink copies are really easy because they are essentially a smart copy command (just add --reflink to any cp command; see the sketch below). They still require the source/destination to be on btrfs, but they are super fast, storage efficient, and don't even require you to have subvolumes defined. And with the potential for ZFS, we have yet another option, as it too supports snapshots! There are other challenges with snapshots as well, so it's a tougher nut to crack than some other features. That doesn't mean it's not on the roadmap.
    1 point
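     A minimal sketch of the reflink copy mentioned above (the paths are placeholders, not from the post); on btrfs this completes almost instantly and shares extents until either file is modified:
        # copy-on-write clone of a vdisk on a btrfs cache pool
        cp --reflink=always /mnt/cache/domains/win10/vdisk1.img /mnt/cache/domains/win10/vdisk1.snap.img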
  35. I was also getting the error, which I thought odd as I've never set up youtube-dl. In the end I renamed youtube-dl.subfolder.conf to youtube-dl.subfolder.conf_BAK, restarted Swag, and everything is back up and running normally (a sketch follows below).
    1 point
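     A minimal sketch of that rename, assuming the default SWAG appdata layout (the path is my assumption, not from the post):
        # disable the proxy conf, then restart the container so nginx reloads
        mv /mnt/user/appdata/swag/nginx/proxy-confs/youtube-dl.subfolder.conf \
           /mnt/user/appdata/swag/nginx/proxy-confs/youtube-dl.subfolder.conf_BAK
        docker restart swag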
  36. Just had the same problem after updating 4 containers this morning. What ended up working for me was to delete the generic 1k icons from /var/lib/docker/unraid/images and then do a "force update" (a sketch follows below). Thanks for everyone's help; those question mark icons were driving me crazy! EDIT: I am also on 6.9.2
    1 point
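     A minimal sketch of the cleanup; the under-2k size filter is my assumption for matching the "generic 1k icons", so inspect the directory before deleting:
        ls -l /var/lib/docker/unraid/images
        # remove only the ~1k placeholder icons, then force-update the containers from the Docker tab
        find /var/lib/docker/unraid/images -name '*.png' -size -2k -delete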
  37. I put "docker" in the version field since the instructions mentioned that. Quite unclear indeed. Then I used latest, and now I find out from you that plexpass is the right one to use. Anyway, I thought HDR tone mapping in HW transcoding was working, but when I check some files in my library, some movies play perfectly while others still show artifacts. The dashboard shows that transcoding is being done by HW, and I can't find any difference between the files. Maybe because I'm on the latest Intel gen it's not fully supported yet. But is your whole 4K library being played with tone mapping and without artifacts?
    1 point
  38. Local SSL has to be set up and working before you can enable remote access. Local SSL defaults to using port 443, but you can change that at Settings -> Management Access -> HTTPS port. If you want to use 443 for NPM, then choose something like 2443, 3443 or 4443 for local SSL (just make sure it is not already in use by another Docker container). Once local SSL is working, go to Settings -> Management Access -> Unraid.net and set the WAN port to something else, like 5443. Then set up a port forward on your router to point external port 5443 to whatever port you configured for local SSL access. One possible layout is sketched below.
    1 point
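     One possible port layout following the example numbers in the post (the exact numbers are arbitrary; the point is that NPM and the Unraid GUI must not share a port):
        443  (LAN)            -> NPM reverse proxy
        2443 (LAN)            -> Unraid GUI local SSL (Settings -> Management Access -> HTTPS port)
        5443 (WAN, forwarded) -> router -> 2443, with 5443 set as the WAN port under Unraid.net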
  39. I think the docker image on /mnt/cache that's mounted on /dev/loop2 is preventing the unmount. I killed a zombie container process accessing /dev/loop2, but I still cannot detach /dev/loop2 and am still stuck trying to unmount. Tried everything here: https://stackoverflow.com/questions/5881134/cannot-delete-device-dev-loop0
        root@Tower:/# losetup
        NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                           DIO LOG-SEC
        /dev/loop1         0      0         1  1 /boot/bzfirmware                      0     512
        /dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img   0     512
        /dev/loop0         0      0         1  1 /boot/bzmodules                       0     512
        root@Tower:/# lsof /dev/loop2
        COMMAND     PID USER FD  TYPE DEVICE SIZE/OFF NODE NAME
        container 15050 root  4u FIFO   0,82      0t0 2917 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stdout.log
        container 15050 root  7u FIFO   0,82      0t0 2917 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stdout.log
        container 15050 root  8u FIFO   0,82      0t0 2918 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stderr.log
        container 15050 root  9u FIFO   0,82      0t0 2918 /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8ea313440eef7c42a99526240f16a5438cf23beb769630a6ede14276aebe8ca5/shim.stderr.log
        root@Tower:/# kill 15050
        root@Tower:/# lsof /dev/loop2
        root@Tower:/# losetup -d /dev/loop2    # fails silently
        root@Tower:/# echo $?
        0
        root@Tower:/# losetup
        NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                           DIO LOG-SEC
        /dev/loop1         0      0         1  1 /boot/bzfirmware                      0     512
        /dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img   0     512
        /dev/loop0         0      0         1  1 /boot/bzmodules                       0     512
        root@Tower:/# lsof | grep loop2
        loop2 12310 root cwd DIR 0,2 440 2 /
        loop2 12310 root rtd DIR 0,2 440 2 /
        loop2 12310 root txt unknown /proc/12310/exe
        root@Tower:/# kill -9 12310    # not sure what this is, but killing it fails
        root@Tower:/# lsof | grep loop2
        loop2 12310 root cwd DIR 0,2 440 2 /
        loop2 12310 root rtd DIR 0,2 440 2 /
        loop2 12310 root txt unknown /proc/12310/exe
        root@Tower:/# modprobe -r loop && modprobe loop    # try to reload the module, but it's builtin
        modprobe: FATAL: Module loop is builtin.
    1 point
  40. There is an issue with /usr/share/terminfo/x/xterm-256color that's preventing it from being used (not sure what exactly; I didn't dig too far). I fixed it by using infocmp to dump the terminfo from my Mac (a sketch follows below). I put the resulting xterm-256color.terminfo file (see here for my copy, YMMV) into /boot/config/ and added the following to my go script:
        # fix xterm-256color terminfo
        tic /boot/config/xterm-256color.terminfo
    1 point
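     A minimal sketch of the dump step on the machine with a working entry (infocmp and tic are the standard ncurses tools; the filename matches the post):
        # export the compiled terminfo entry to source form
        infocmp xterm-256color > xterm-256color.terminfo
        # copy the file to /boot/config/ on the Unraid box; the go script's tic line above recompiles it at boot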