Leaderboard

Popular Content

Showing content with the highest reputation on 01/12/20 in all areas

  1. You've obviously got some ideas, so why not do it? The problem I see, time and time again, is people telling us what we should be doing and how quickly we should be doing it. Now, don't be offended, because this is a general observation rather than personal. It's ten to one in the morning, I've just got back from work, I have a toddler that is going to get up in about five hours, my wife is heavily pregnant, and Unraid Nvidia and beta testing just aren't up there in my list of priorities at this point. I've already looked at it, and I need to look at compiling the newly added WireGuard out-of-tree driver. I will get around to it, but when I can. And if that means some Unraid users have to stick on v6.8.0 for a week or two, so be it; or, alternatively, forfeit GPU transcoding for a week or two, so be it. I tried every way I could when I was developing this to avoid completely repacking Unraid, I really did; nobody wanted to do that less than me. But if we didn't do it this way, we just saw loads of seg faults. I get a bit annoyed by criticism of turnaround time, because, as this forum approaches 100,000 users, how many actually give anything back? And of all the people who tell us we should be quicker, how many step up and do it themselves?

     TL;DR: It'll be ready when it's ready, not a moment sooner, and if my wife goes into labour, well, it's probably going to get delayed.

     My life priority order:
     1. Wife/kids
     2. Family
     3. Work (pays the mortgage and puts food on the table)

     @Marshalleq The one big criticism I have is comparing this to the ZFS plugin; no disrespect, but that's like comparing apples to oranges. Until you understand (my last lengthy post on this thread might give you some insight), please refrain from complaining. ZFS installs a package at boot; we replace every single file that makes up Unraid other than bzroot-gui. I've said it before, I'll say it again: WE ARE VOLUNTEERS. Want enterprise-level turnaround times? Pay my wages.
    8 points
  2. This is a bug fix and security update release. Due to a security vulnerability discovered in forms-based authentication: ALL USERS ARE STRONGLY ENCOURAGED TO UPGRADE

     To upgrade: If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg

     Refer also to @ljm42 excellent 6.4 Update Notes, which are helpful especially if you are upgrading from a pre-6.4 release.

     Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report.

     Version 6.8.1 2020-01-10

     Changes vs. 6.8.0

     Base distro:
     - libuv: version 1.34.0
     - libvirt: version 5.10.0
     - mozilla-firefox: version 72.0.1 (CVE-2019-17026, CVE-2019-17015, CVE-2019-17016, CVE-2019-17017, CVE-2019-17018, CVE-2019-17019, CVE-2019-17020, CVE-2019-17021, CVE-2019-17022, CVE-2019-17023, CVE-2019-17024, CVE-2019-17025)
     - php: version 7.3.13 (CVE-2019-11044, CVE-2019-11045, CVE-2019-11046, CVE-2019-11047, CVE-2019-11049, CVE-2019-11050)
     - qemu: version 4.2.0
     - samba: version 4.11.4
     - ttyd: version 20200102
     - wireguard-tools: version 1.0.20200102

     Linux kernel:
     - version 4.19.94
     - kernel_firmware: version 20191218_c4586ff (with additional Intel BT firmware)
     - CONFIG_THUNDERBOLT: Thunderbolt support
     - CONFIG_INTEL_WMI_THUNDERBOLT: Intel WMI thunderbolt force power driver
     - CONFIG_THUNDERBOLT_NET: Networking over Thunderbolt cable
     - oot: Highpoint rr3740a: version v1.19.0_19_04_04
     - oot: Highpoint r750: version v1.2.11-18_06_26 [restored]
     - oot: wireguard: version 0.0.20200105

     Management:
     - add cache-busting params for noVNC url assets
     - emhttpd: fix cryptsetup passphrase input
     - network: disable IPv6 for an interface when its setting is "IPv4 only"
     - webgui: Management page: fixed typos in help text
     - webgui: VM settings: fixed Apply button sometimes not working
     - webgui: Dashboard: display CPU load full width when no HT
     - webgui: Docker: show 'up-to-date' when status is unknown
     - webgui: Fixed: handle race condition when updating share access rights in Edit User
     - webgui: Docker: allow to set container port for custom bridge networks
     - webgui: Better support for custom themes (not perfect yet)
     - webgui: Dashboard: adjusted table positioning
     - webgui: Add user name and user description verification
     - webgui: Edit User: fix share access assignments
     - webgui: Management page: remove UPnP conditional setting
     - webgui: Escape shell arg when logging csrf mismatch
     - webgui: Terminal button: give unsupported warning when Edge/MSIE is used
     - webgui: Patched vulnerability in auth_request
     - webgui: Docker: added new setting "Host access to custom networks"
     - webgui: Patched vulnerability in template.php
    4 points
  3. Sometimes Limetech uses stripped versions of packages; for example, ncurses doesn't have the correct terminal library (screen) for tmux, so either I change the terminal type or I overwrite the stock package. Libevent too. I've become strict about Unraid versioning because of these stock packages, but I'll implement a better way of dealing with those incompatibilities. By the way, 6.8.1 support is implemented.
    3 points
  4. I started my Unraid journey in October 2019, trying out some stuff before I decided to take the plunge and buy 2 licences (one for backups and one for my daily use). I work in IT, and knowing how much noise rack equipment makes at work, I had no plans whatsoever to go that route. My first attempt at an Unraid build used a large Nanoxia high tower case that could take at most 23 3.5" drives, but once I reached around 15 drives the temperatures started to become a problem. What I also didn't foresee was how incredibly hot the HBA controllers become. I decided to buy a 24-bay 4U chassis to see what I could get away with just using the case, switching out the fan wall with Noctua fans and such to reduce the noise levels. I got a 24-bay case that supports a regular ATX PSU, because I know server chassis usually go for 2U PSUs to have backups, and those sound like jet engines, so I wanted none of that. With everything installed in the 24-bay 4U chassis, the noise levels were almost twice as high as in the Nanoxia high tower case, even when I switched the fans out and used some nifty "low noise adapters". Next up was the idea of a soundproofed 12U rack. Is it possible? How warm will stuff get? What can I find? I ended up taking a chance and bought a soundproofed 12U rack from a German company called "Lehmann". It wasn't cheap in any sense, but it was definitely worth it! I couldn't possibly be happier with my build. From top to bottom:
     - 10Gbps switch
     - AMD Threadripper VM host server
     - Intel Xeon Unraid server
     - StarTech 19" rackmount KVM switch
     - APC UPS with Management Card
     In total, 12U of 12U used! Temps: around 5°C higher than room temperature, Unraid disks averaging 27°C. Noise level? Around 23dB; I can sleep in the same room as the rack!
    2 points
  5. Awesome! Bring on 6.9 with that new 5.4 kernel
    2 points
  6. There are tons of fruitless posts about Windows 10 and SMB as the root cause of being unable to connect to unRaid, so I'm recording this easy fix for my future self. If you cannot access your unRaid shares via DNS name ( \\tower ) and/or via IP address ( \\192.168.x.y ), then try this. These steps do NOT require you to enable SMB 1.0, which is insecure.
     Directions:
     1. Press the Windows key + R shortcut to open the Run command window.
     2. Type in gpedit.msc and press OK.
     3. Select Computer Configuration -> Administrative Templates -> Network -> Lanman Workstation, double click Enable insecure guest logons and set it to Enabled.
     4. Now attempt to access \\tower
     Related errors:
     - Windows cannot access \\tower
     - Windows cannot access \\192.168.1.102
     - You can't access this shared folder because your organization's security policies block unauthenticated guest access. These policies help protect your PC from unsafe or malicious devices on the network.
    1 point
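     A footnote for anyone on Windows 10 Home, where gpedit.msc isn't available: as far as I know, that Group Policy setting is backed by the AllowInsecureGuestAuth registry value, so a .reg fragment along these lines should be equivalent. Treat this as a sketch and verify the key path on your build before importing:

     ```reg
     Windows Registry Editor Version 5.00

     ; Equivalent of "Enable insecure guest logons" = Enabled
     [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
     "AllowInsecureGuestAuth"=dword:00000001
     ```

     Save it as a .reg file and double-click to import, or apply the same value with regedit; a reboot (or restarting the Workstation service) may be needed for it to take effect.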
  7. Overview: Support for Docker image unraid-api in the electricbrain repo. Docker Hub: https://hub.docker.com/r/electricbrainuk/unraidapi Github/ Docs: https://github.com/ElectricBrainUK/UnraidAPI Discord: https://discord.gg/Qa3Bjr9 If you feel like supporting our work: https://www.paypal.me/electricbrain Thanks a lot guys, I hope this is useful!
    1 point
  8. I've started to create a dashboard section for ZFS details. Let me know if you can help me out with PHP/JSON/Bash. I don't really know what I'm doing, just trial and error, modifying an existing plugin (corsaircpu, status.php & status.page). https://github.com/ezraholm50/zfs-frontend-unraid
    1 point
  9. The plugin is fully functional; there were problems during Unraid 6.8.1 update because the plugin wasn't updated for it, but it's working right now.
    1 point
  10. Bitch and complain to Crucial. According to them and the other reports posted here and elsewhere, pending sectors on some of their lines of SSDs are normal and par for the course, and will happen continually. Whether it's a firmware issue or actual pending sectors, it doesn't exactly inspire confidence in me.
    1 point
  11. Looks like you have too many digits in your LAN network setting. 10.0.0.1.0 isn’t valid. Judging by the error message you posted it should probably be 10.0.0.0/24 if your server address actually is 10.0.0.114.
    1 point
  12. To properly handle unsupported status, I'm thinking of displaying a message in the webui and disabling the plugin functionality until proper support is in place. What do you guys think?
    1 point
  13. @J05u Make sure you edit the correct line in the XML! You can easily pass the wrong device to the VM, which can cause the whole server to freeze. Maybe try to set up a new VM as Q35 for testing purposes; it's safer this way, I think. Posting some error logs, the XML, and some more info might be useful for people trying to help you.
    1 point
  14. Thank you. I managed to work it out in the end, but thank you so much for replying.
    1 point
  15. Well, it's taken me ages, but I believe I've cracked the nut. It forced me to learn much more about Linux, and I had to (sort of) learn Bash scripting. I now have a fairly comprehensive backup strategy working. I won't go into details, but here's a summary: Borg backs up my Linux desktop to unRAID via SSH (using the backup-user, thanks to the SSH plugin), with systemd scheduling my bash script. It's quite comprehensive and provides desktop notifications of its status. It also prunes old backups. unRAID has a scheduled cron job (thanks to the User Scripts plugin) that then switches user to backup-user and runs another script that performs health checks on the repository without breaking the permissions! These take about 2 hours for a 400GB repository, which is why I want it to run server-side, without the client having to do anything. The script notifies my email via Notifications of the success, warning, or failure of the run. Not yet implemented, but down the track my unRAID server will SSH to an offsite repository using the backup-user to complete the 1-2-3. I've learned so much about backups in general, Bash, systemd, and unRAID during this exercise. I may try to document it and share, particularly my scripts, which for a total noob I'm quite proud of. It'll take me a while to document it. Thanks to the people who tried to help through this thread, particularly @Can0nfan, who made me aware of the time-saving SSH addon. I believe I could have done what it does, but it would have taken me longer. 🙏
    1 point
  16. And I refer people to my last lengthy post on here to help put this in perspective how much work this takes. And on a daily basis me and the other linuxserver guys are dealing with the other stuff we do, ie docker, and trying to answer support stuff. Just saying.....
    1 point
  17. Now we’re cooking! The temperature returneth! Great work. Thank you for seeing this through! —Sent from my iPhone using Tapatalk Pro
    1 point
  18. I’m in! Cheers! Now to figure out how to use it! Thanks for your advice, much appreciated. Hope you have a pleasant weekend away from home. 👍🏻
    1 point
  19. Post three screen captures, one for each of the three bold descriptions in the quote of your post above. (Describing something in Win10 in words alone is usually confusing, and everyone has a different idea of what is meant!) Also tell us what type (Home, Pro, etc.) and version (type winver in the taskbar search box, which will provide the option to run it in a cmd box) of Windows 10 you are currently using.
    1 point
  20. Meh, not to start a war of the OSes, but I don't think FreeBSD's major calling card is networking performance. Here's a study from 2019 showing FreeBSD holding its own, but not dominating Linux in any capacity: https://www.phoronix.com/scan.php?page=article&item=windows-linux-10gbe&num=4 And unless I'm mistaken, even Cisco UCS systems that support 40GbE+ connectivity don't support FreeBSD. The main benefit of FreeBSD over Linux is its licensing model. The BSD license allows developers to advance the platform without having to contribute any improvements back to the FreeBSD maintainers (if they don't want to). This is why so many companies make hardware appliances that use FreeBSD as their base OS (they can fine-tune their product for maximum performance and then charge a premium for their efforts, without allowing someone to just rip off the code they spent time creating). That licensing also sidesteps the friction with the CDDL that ZFS was released under, which is why Oracle has never sued FreeNAS (and frankly I bet that was the deciding factor that led FreeNAS to use FreeBSD in the first place). Don't let any of this come across as me negating your reasons for needing FreeBSD, though. A job's a job, and if you need it for your job, well, then you need it. That said, I can't imagine the FreeBSD developers can really afford to let this issue linger. It's not like Linux KVM is a small platform anymore, and I have to imagine the vast majority of FreeBSD users are using it in a VM (not bare metal). I could be wrong.
    1 point
  21. It's for this kind of issue that binhex isolated preclear in a Docker container.
    1 point
  22. I'm going to go out on a limb here and speak for @CHBMB. This plugin (and the DVB plugin) are built and maintained by him on a volunteer basis and it isn't always convenient for him (or anyone else) to continually keep up with every release as soon as it's released. There is significant time required to build and test each and every release which isn't always doable because of other commitments which are probably far more important. Cut him some slack. To be honest, at times I'm surprised due to the time required for this that he even bothers to support RC's.
    1 point
  23. Cloud Commander has the best UI of the file managers I've tried. Is there any way to password protect the web access from curious users?
    1 point
  24. Hello; I have a question about Bitwarden, since the one in your repo is flagged as deprecated. How can I migrate from the deprecated one to the new one? Thanks in advance 🙂
    1 point
  25. Power or connection problem; they show up similarly in the logs, so it could be either. You can do that by doing a new config and checking "parity is already valid" before array start. BTW, were all those sync errors expected?
    1 point
  26. Not yet, we'll need to wait. Taking the opportunity to move this to stable reports since it's still an issue.
    1 point
  27. No, 6.8-stable does not have suitable Linux 5.x patches, and it would be very risky to make them. If you want to use this kernel, revert to the latest version of Unraid that supports 5.x Linux (like 6.8rc5, which I'm using).
    1 point
  28. Very unlikely that two disks failed at the same time, but as to your question: since you already have two disabled disks, the risk of rebuilding one or two is the same, because if another disk fails during the rebuild (of either 1 or 2 disks) you might lose data. So IMHO, if you're going to rebuild, do both at the same time.
    1 point
  29. Change: /boot/custom/etc/rc.d/S20-init.rsyncd to: bash /boot/custom/etc/rc.d/S20-init.rsyncd It's because of the new flash drive security features; it's in the release notes.
    1 point
  30. It is mentioned in the 6.8 release notes that the security of the flash drive has been tightened and files located there can no longer have execute permission. Options now available are:
     - Add the interpreter before the script name, e.g. bash scriptname
     - Copy the script elsewhere and then give it execute permissions
     - Use the User Scripts plugin to run the script
    1 point
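     To illustrate why the first option works: the interpreter reads the script as an ordinary file, so the execute bit the flash mount strips is never needed. A minimal sketch (the /tmp path and script name are just examples, not anything from Unraid itself):

     ```shell
     # Create a demo script and remove its execute permission,
     # mimicking a file on the no-exec flash mount.
     cat > /tmp/demo.sh <<'EOF'
     echo "hello from flash"
     EOF
     chmod -x /tmp/demo.sh

     # ./demo.sh would fail with "Permission denied" here,
     # but handing the file to the interpreter still works:
     bash /tmp/demo.sh
     ```

     The same trick applies to any interpreted script (sh, bash, php, python), which is why prefixing the command is the smallest possible change to a go file or cron entry.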
  31. Yay, finally! If anyone else runs HomeAssistant and unRAID, this is basically a one-way MQTT bridge from unRAID to HA. You can control containers, VMs, USB hotplugging, etc. Please help with testing and feedback!
    1 point
  32. Can you explain your problem? Is it locking up at boot, or randomly during use? With my 5700 XT, I would randomly crash Unraid during gaming or even just using Chrome in the Win 10 VM. I made a ton of changes, but I finally got it stable. I settled on 6.8.0-rc5 with the kernel from the first thread. I added a second GPU in slot 3, which I have set in my BIOS as the initial video device. With this setup I no longer have to pass a vbios to the VM. I also updated to Adrenalin 2020 drivers. With those changes, instead of locking up the entire host I would "only" lose signal where I was previously crashing. I could hear game audio continue, but the only recourse was to force stop the VM. The final fix was to DISABLE Radeon Anti-Lag and Radeon Enhanced Sync in Adrenalin. With that final change I am 100% stable and have no problem restarting my VM. That's a long way of saying: check whether you have Radeon Anti-Lag and Radeon Enhanced Sync on, and turn them OFF. They default to on, at least in Adrenalin 2020. The second GPU might not be necessary; it might have just changed the behavior from crashing all of Unraid to only having to force stop the VM. Also, use q35-4.0.1 (or newer) if you want gen4 PCIe speed without XML changes.
    1 point
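     For anyone hunting for where that machine type lives: it's set in the VM's libvirt XML, in the <os> element. A sketch of what the relevant line looks like with a Q35 4.0.1 machine type; the exact machine string offered depends on your QEMU build, so treat this as illustrative rather than copy-paste:

     ```xml
     <os>
       <!-- machine type pinned to q35 4.0.1; newer q35 versions work too -->
       <type arch='x86_64' machine='pc-q35-4.0.1'>hvm</type>
     </os>
     ```

     In the Unraid VM editor the same thing is the "Machine" dropdown in form view, so no manual XML editing is needed unless you want a version the dropdown doesn't list.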
  33. This should be folded into the main dev branch and included by default, I think...
    1 point