Leaderboard

Popular Content

Showing content with the highest reputation on 06/23/20 in all areas

  1. Hi guys, I was inspired by this post from @BRiT and created a bash script that lets you set media to read-only, to prevent ransomware attacks and accidental or malicious deletion of files. The script can be executed once to make all existing files read-only, or can be run via cron to catch newly created files as well. The script has a built-in help system with example commands; any questions, let me know below.
     Download by issuing the following command from the unRAID 'Terminal':
       curl -o '/tmp/no_ransom.sh' -L 'https://raw.githubusercontent.com/binhex/scripts/master/shell/unraid/system/no_ransom/no_ransom.sh' && chmod +x '/tmp/no_ransom.sh'
     Then to view the help simply issue:
       /tmp/no_ransom.sh
     Disclaimer: Whilst I have done extensive tests and runs on my own system with no ill effects, I do NOT recommend you run this script across all of your media until you are fully satisfied that it is working as intended (try a small test share first). I am in no way responsible for any data loss due to the use of this script.
    3 points
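The read-only pass such a script performs can be sketched roughly as below. This is a hypothetical illustration, not the actual no_ransom.sh code; `make_read_only` and its argument are names made up for the sketch.

```shell
#!/bin/sh
# Hypothetical sketch of a read-only pass (not the actual no_ransom.sh).
# Strips write permission from regular files only; directories keep their
# 'w' bit so new files can still be created inside them.
make_read_only() {
    target="$1"
    find "$target" -type f -exec chmod a-w {} +
}
```

Run once, it makes all existing files under a share read-only; scheduled via cron, each run also catches files created since the previous run, which matches the behaviour described above.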
  2. A discounted 2nd or 3rd license was offered a few years ago in the major transition from unRAID v5 to v6, and I bought a couple of extra licenses. Since then, that has not been offered, but I would not hesitate to buy a couple of extra licenses at full price if a need arose. For my purposes, at the current price levels unRAID remains a bargain even in comparison to free offerings (yes, a discount would still be nice, but hopefully not an absolute necessity. 😀)
    2 points
  3. Please note: The author of the application is aware that the software needs rework. This is ongoing at the moment, so if you find the application a bit hard to work with, you may want to be patient and wait for the rework to progress. As it stands, the app can be made to work with some fiddling, but it's not very smooth. I still think this application provides a great benefit to the community of doujinshi collectors, so I am not going to pull the template for it (marked as beta in CA, as before). Thank you and happy reading!
     _______________________________________
     Hello, and I am glad you're stopping by to check out this support thread for my CA template for HappyPandaX. The application itself is released in alpha by its author, and this CA template is in beta. I would appreciate it if you could help me test this application. There are some known issues: e.g. downloading from Exhentai requires being logged in (go into About > Plugins and install the plugins that are available for a stock installation, click on "Open Plugin Site" for the EHentai Login plugin, and follow the instructions from there) and having credits available for downloading. As for downloading from other pages, so far no luck for me on nhentai; however, scraping metadata for existing doujinshi is working fine for me. You can stage your scans and metadata scrapes before you commit them to your collection (Add > Scan).
     GitHub Link
     Application Author's Patreon, if you want to support them
     Documentation
     The author is also an artist and links to their various outlets on Patreon and Twitter. Support for this CA template is provided by me; however, do keep in mind the application is alpha and the CA template beta.
     Change Log (just the unRAID CA template)
     2020 June 1: Initial release to CA
     Cheers and happy organizing!
    1 point
  4. @uek2wooF You may try the prepare function: add a custom VM and use the XML form which is created by the macinabox docker. Don't forget to create your vdisk then, too.
    1 point
  5. What did it actually say? The -n (nomodify) flag means check but don't repair anything.
    1 point
  6. When you provide us with an excellent build, I turn around and serve it to a dozen others who are ITCHING to play on the new update. All I'm saying is this is kind of your fault for making this work so well. ;P I'll tell them to cool their jets and wait a few hours. Thanks again.
    1 point
  7. I would start here. If your USB flash drive is not recognized or drops offline during boot, that will cause networking problems as the system starts. It might be helpful if you posted your diagnostics zip file (type 'diagnostics' at the command line, since you cannot get into the GUI). The file will be saved to your flash drive and you should attach it to your next post. That will likely show whether or not the flash drive is being dropped during the boot process.
     Just to make sure there are no errors on the flash drive, remove it from your server and let Windows run a chkdsk on it in another machine. Go into your BIOS and make sure the flash drive is specified as the first (and on some motherboards, the only) boot device. On mine, since I am booting UEFI, my one and only boot device is UEFI:[name of flash device]. For legacy boot it might be something like USB:[name of flash device]. If your motherboard has USB2 ports, put the flash drive in one of those on your server; USB2 is more reliable than USB3 on many boards. After sorting that out, you can see if it has any effect on your network problems. If not, you can go on to the next section.
     On the flash drive the network configuration is stored in /config/network.cfg. It is a text file you can edit if you want to make manual changes. Simply deleting /config/network.cfg (or renaming it, if deleting makes you nervous) will force a new one to be created with defaults on reboot. If you have only one NIC now and you have a /config/network-rules.cfg file, try renaming it so it is not found (or deleting it) and let another one be created on boot if needed. Changing boards, and thus the NICs available for unRAID to use, can cause some confusion for unRAID as it looks for an interface that is no longer present. This is especially true if you go from two Ethernet interfaces to one and you had bonding of the two NICs enabled on your prior system.
     Hopefully, you will not need to manually edit anything once you get the above issues resolved. If you do, the information below may be helpful. Here is what /config/network.cfg looks like on my system with two NICs (eth0 and eth1) with no bonding; the unRAID GUI is accessible on eth0:
       # Generated settings:
       IFNAME[0]="br0"
       BRNAME[0]="br0"
       BRSTP[0]="no"
       BRFD[0]="0"
       BRNICS[0]="eth0"
       PROTOCOL[0]="ipv4"
       USE_DHCP[0]="no"
       IPADDR[0]="192.168.1.10"
       NETMASK[0]="255.255.255.0"
       GATEWAY[0]="192.168.1.1"
       METRIC[0]="1"
       DNS_SERVER1="1.1.1.1"
       DNS_SERVER2="1.0.0.1"
       USE_DHCP6[0]="yes"
       DHCP6_KEEPRESOLV="no"
       DESCRIPTION[0,1]="Dockers"
       VLANID[0,1]="3"
       PROTOCOL[0,1]="ipv4"
       METRIC[0,1]="2"
       VLANS[0]="2"
       IFNAME[1]="br1"
       BRNAME[1]="br1"
       BRNICS[1]="eth1"
       BRSTP[1]="no"
       BRFD[1]="0"
       PROTOCOL[1]="ipv4"
       USE_DHCP[1]="yes"
       METRIC[1]="3"
       SYSNICS="2"
     I have a static IP address assigned to eth0 and have DHCP turned off to ensure the right address is assigned to that NIC. Another way to do it is through a DHCP reservation in your router. On eth1, DHCP is enabled.
     Also, you may have a /config/network-rules.cfg file from your last motherboard with two NICs. Mine looks like this:
       # PCI device 0x8086:0x1533 (igb)
       SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="d0:xx:xx:xx:xx:02", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
       # PCI device 0x8086:0x1533 (igb)
       SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="d0:xx:xx:xx:xx:03", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
     Note how the designation of which NIC is eth0, eth1, etc. is done by MAC address. This is normally done in the GUI. If you have this file, the designation of which physical NIC is eth0 by MAC address may be incorrect.
     I recently changed motherboards and all of this was taken care of automatically, with no manual editing on my part; however, I went from a motherboard with two NICs to another motherboard with two NICs and no change in network configuration, so it all just worked after the upgrade.
    1 point
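The "rename rather than delete" step described above can be sketched as a small helper. This is a hypothetical sketch; on a real server the directory is /boot/config, parameterised here so the example is self-contained, and `reset_network_config` is a made-up name.

```shell
#!/bin/sh
# Hypothetical sketch: move network.cfg and network-rules.cfg aside so
# unRAID regenerates defaults on the next boot. On a real server the
# directory is /boot/config; it is a parameter here for illustration.
reset_network_config() {
    config_dir="$1"
    for f in network.cfg network-rules.cfg; do
        if [ -e "$config_dir/$f" ]; then
            # Rename instead of deleting, so the old settings can be restored.
            mv "$config_dir/$f" "$config_dir/$f.bak"
        fi
    done
}
```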
  8. Damn, you guys are quick! My automated script only picked up the change 30 mins ago; there is a 1-hour delay, as I have found the URL is sometimes not updated quickly enough on the Mojang website, so it should start building within the next hour. Edit: it just triggered this minute, so expect a new image in around 30 mins.
    1 point
  9. The key point is that there is no /boot mount point listed, which means unRAID did not locate the USB drive during the later stages of the boot process. As a result, no settings were applied and various drivers could not be loaded. Quite why, I do not know, not being an ESXi user, but that might at least give you a starting point for further investigation.
    1 point
  10. This point always needs emphasis. Limetech accomplishes a tremendous amount for their size.
    1 point
  11. The bad news: https://www.spinics.net/lists/netdev/msg658594.html The good news: the vendor-supplied r8125 driver does build correctly against the 5.7 kernel, but we are still testing. Realtek has always been a problem with Linux; I'd avoid them if possible.
    1 point
  12. Flagged as out of date on AUR today; as soon as it is updated, the build will happen automatically: https://aur.archlinux.org/packages/minecraft-server/
    1 point
  13. Very interesting. It should not be underestimated that not only can a server centrally host files and services, it can also have its own locked-down policies that you wouldn't necessarily like to set on your desktop, because they might hinder your workflows too much. Talk about truly combining the strengths of both. Add backups to the mix and you're looking at incredible data protection. Now if we could have true bit-rot protection baked into unRAID, that'd be incredible...
    1 point
  14. I have copied the report to this thread.
    1 point
  15. The plugin is included in 6.9 beta22+ and should not be installed in those versions
    1 point
  16. I would suggest that an easy first step would be to log in at the console and use the 'df' command to see if the unRAID USB drive is being mounted at /boot. If not, then the problem is related to unRAID not being able to find and mount the USB drive during the boot process, at the point when settings are loaded and applied.
    1 point
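That check can also be scripted; here is a minimal sketch (`is_mounted` is a made-up name, and on a real unRAID console you would pass /boot):

```shell
#!/bin/sh
# Minimal sketch of the df check described above: succeed only when the
# given path is itself a mount point. Pass /boot on a real unRAID console.
is_mounted() {
    # df -P reports the mount point of the filesystem containing the path;
    # if that mount point is the path itself, the path is a mount point.
    mount_point=$(df -P "$1" 2>/dev/null | awk 'NR==2 {print $6}')
    [ "$mount_point" = "$1" ]
}
```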
  17. Screenshot from yesterday morning: https://imgur.com/IgsKqLy - yep, same as you. But I'm currently at 1331 CPU time hours.
    1 point
  18. Normally those are partitions on the same drive and temperatures would be redundant. This is not a typical UD disk setup.
    1 point
  19. Yes, I have this kind of setup. You can externalise hard drives in a specialised 'disk shelf' chassis, or simply use any suitable PC case you have lying around. The systems are usually connected through a SAS controller (HBA) in the main system. The controller will have internal or external SAS ports that are typically connected to the target / external drive array using something like a Dual Mini SAS 26pin SFF-8088 to 36pin SFF-8087 Adapter. unRaid will see these external drives just like regular drives. You can add them to the array, use them for parity, cache or as unassigned disks. You might be interested in this blog post where I touch on the idea. It can get a lot more complicated with expanders, disk backplanes etc., but in basic terms, what you envisage is possible, and quite common. Search on here or the interweb for 'disk shelf', 'hba', 'SAS controller', 'backplane' to get started.
    1 point
  20. A whole lot of information in this thread. Basically, people are seeing writes many, many times higher than they should be when docker/appdata is stored on a BTRFS drive. Some are reporting TBs written every day (I think someone said 20TB a day!), SSDs overheating due to all the writes, and SSDs burning through their warranty write endurance in a matter of months. I didn't have the extreme writes some had, but with docker and appdata on the BTRFS drive I was seeing 7GB/hour and climbing over time. Moving docker and appdata to an XFS drive dropped writes to ~200MB/hour, holding steady or dropping over time.
    1 point
  21. I think @bonienl is going to detail things after the "design guide" for plugins and multi-language is finalized and some procedural process for submitting translations on plugins gets worked out. And no, it doesn't break anything for backwards compatibility, as you can tell, since most of my plugins and all of the dynamix ones (and UD) support multi-language. (But I went about backwards compatibility slightly differently than the official method.)
    1 point
  22. One use case I will be using it for off the bat would be having a separate cache for docker and appdata formatted as XFS, to prevent the 10x-100x inflated writes that happen with a BTRFS cache. It is also a way of adding more than 30 drives if someone needed that. A second cache pool could be used as a more "classic" NAS with RAID and, apparently, possible ZFS support in the future, really pushing into FreeNAS territory there. Or simply set up cache pools based on usage and speed needs: for example, a scratch drive that doesn't need redundancy, with a RAID 0 setup on less trustworthy drives; another high-speed cache with NVMe drives for working projects; then a high-stability pool for normal array write caching, using RAID 1 and very good drives with a very low chance of failure. Just the first things that came to mind. If they make it, people will find uses for it, that is for sure. For example, this makes a tiered storage system fairly easy to implement in the future, which is a use case I would use for sure. Tiered storage will move recently/frequently used data to faster storage pools and less-used or old data to slower tiers automatically.
    1 point
  23. I have 3 original HGST drives, a 500GB Deskstar from around 2005-2007 (I don't remember exactly) and two 3TB UltraStars. They have been bulletproof and solid performers. I cannot speak to what has happened to them after WD acquired HGST; they may be the same high-quality drive, and I would expect that, but I have not used any of the WD variants.
    1 point
  24. I added those patches for next release.
    1 point
  25. 1 point
  26. So I have this working with something I wanted to cron, but is there a way to tell whether this schedule is set up? cat /etc/cron.d/root
    1 point
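One way to both install an entry and confirm it is present, sketched as a hypothetical helper (on unRAID the file mentioned above is /etc/cron.d/root; it is a parameter here so the sketch is self-contained, and `ensure_cron_entry` is a made-up name):

```shell
#!/bin/sh
# Hypothetical sketch: append a cron entry if absent, then confirm it is
# present. On unRAID the file in question is /etc/cron.d/root; it is a
# parameter here for illustration.
ensure_cron_entry() {
    cron_file="$1"
    entry="$2"
    # -x matches the whole line and -F treats the entry literally, so the
    # entry is appended at most once no matter how often this runs.
    grep -qxF "$entry" "$cron_file" 2>/dev/null || echo "$entry" >> "$cron_file"
    # Exit status 0 means the schedule is now set up.
    grep -qxF "$entry" "$cron_file"
}
```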
  27. I did this morning. While it's still very early, I think this may finally be fixed: Screenshots here: https://forums.engineerworkshop.com/t/unraid-6-9-0-beta22-update-fixes-and-improvements/215 I am seeing a drop from ~8 MB/s to ~500 kB/s after upgrade with a similar server load (basically idle) and the same Docker containers running. Hopefully the trend holds. -TorqueWrench
    1 point
  28. Well, my entire Unraid journey thus far has been to wait for a SpaceInvaderOne video, copy what he did, and be happy with the results, because it always worked first time. I wish I had known this video series was on the way, otherwise I wouldn't have spent 3 hours last night trying to get deCONZ working!! I'm so excited to see this series! During the lockdown, home automation has been my number one thing to keep me sane: I've tried a CC2531 Zigbee stick, bought loads of Xiaomi sensors, and just put a new Sonoff BasicR3 into the network. @SpaceInvaderOne, I saw your docker and tried it; it worked well until I tried to get it linked with HA, but then it wasn't able to get an API key. I ended up using the one from the docker hub, with this post to fix my integration issues with the home assistant core docker. I needed to set loads of environment variables and also the "dialout" usermod setting to get it working properly. I don't know if it's worth you adding those into your docker image too? I used the command
       ls -l /dev/serial/by-id
     to find out what my ConBee II was using:
       lrwxrwxrwx 1 root root 13 May 30 02:40 usb-dresden_elektronik_ingenieurtechnik_GmbH_ConBee_II_DE2195326-if00 -> ../../ttyACM0
     This is what I ended up with in my docker settings (obviously I had to remap the port from the usual port 80 to some other random port).
    1 point
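The ls -l output above shows a symlink into ../../ttyACM0; the same answer can be obtained directly with readlink. A small sketch (`resolve_serial` is a made-up name):

```shell
#!/bin/sh
# Sketch: resolve a /dev/serial/by-id symlink to its real device node,
# e.g. a ConBee II entry would resolve to /dev/ttyACM0.
resolve_serial() {
    # readlink -f follows the symlink chain and canonicalises the path.
    readlink -f "$1"
}
```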
  29. @pm1961 Suricata updates the Snort rules depending on how often you have set it to check for new rulesets. It constantly gets updates; filters are added and removed. Maybe in one of the newer sets there is a rule that detects parts of the eco communication as malicious and blocks it or drops the packets. I'm using Snort, but it's basically the same as Suricata, and from time to time I have to adjust the filters and whitelist some. I guess Suricata has an alerts or warnings page where you will find the specific rule that gets triggered.
    1 point
  30. I'm a tad confused: why do we need to join their poll if we can vote here? Though I'd like to point out that if this is a serious poll being taken on board by the unRAID team, I'd much rather see it handled completely on your site, here in the polls. I left Facebook long ago. Not an issue for most, I agree, but I don't wish to use a social service that has such massive disregard for its members.
    1 point
  31. I started understanding how to put this all together and wanted to throw some info out there for those that need it. First of all, if you are going to use the vanilla version of MC (what binhex has provided), I would recommend the following:
     - Make sure the docker is not running
     - Browse out to your appdata\binhex-minecraftserver\minecraft folder
     - Edit the server.properties file with Notepad++ (I'm using Windows for all of this)
     - Change the following settings if you like:
       difficulty=[easy|hard]
       gamemode=[creative|adventure|survival]
       force-gamemode=[true|false]
       level-name=world <=== This is the folder name all your game data is saved into
       motd=Logon Message <=== Message displayed when you log into the server from an MC client
     Now, if you are like me, you want to use Forge or Bukkit. In this case:
     - Create a folder on your C:\ drive called "Minecraft"
     - Download the minecraft server file from HERE and place it into C:\Minecraft (I believe it's called 'minecraft_server.1.14.4.jar')
     - Double-click the file and wait for a minute as it downloads some MC server files
     - When it stops, edit the EULA.txt file and change the line inside from false to true: eula=true
     - Double-click on the minecraft_server.1.14.4.jar file again and wait for it to finish
     - Type in "/stop". This will kill the minecraft server.
     - Download Forge for the version of MC server you just downloaded (you want the INSTALLER button in the recommended box on the site)
     - Place this file (forge-1.14.4-28.1.0.jar) in C:\Minecraft
     - Double-click on this file
     - Select SERVER and change the path to C:\Minecraft
     - Let it perform its magic
     - Once finished, again, shut it down with "/stop"
     - Now copy the contents of C:\Minecraft to appdata\binhex-minecraftserver\minecraft
     - Delete the file appdata\binhex-minecraftserver\perms.txt (this will restore the default permissions to the files you copied over)
     - In unRAID, edit the docker and create a new variable
     - Click SAVE and then APPLY/DONE
     - Fire up the docker
     This will use the Forge jar file within the docker container instead of the vanilla jar file. From this point, if you want to add resource packs or mods, you can download them and install them into the "mods" or "resourcepacks" folder as necessary. These folders may need to be created. A good mod to verify that your server is working is FastLeafDecay-Mod-1.14.4.jar; you can find it HERE. Chop a tree down and it should dissolve a lot quicker than normal. I would also recommend adding one or two mods at a time and testing. Let me know if you'd like more details on the above.
    1 point
  32. So I've tested several firmware versions to track down where the issue was introduced, and the firmware that I was able to get this working correctly with is P16 (LSI HBAs, LSI SAS2308/SAS2008). To those who are seeing this error: you might want to downgrade to the P16 firmware, and FSTRIM should then work with your SSDs (obviously, if it is supported). Tested firmware versions:
       P14 -> YES
       P15 -> YES
       P16 -> YES
       P17 -> NO
       P18 -> NO
       P19 -> NO
       P20 -> NO
     Cheers. Guide and files for 9201-16i and 9207-8i: lsi-flash-efi.zip NOTE: You need to boot the EFI shell attached in order to proceed with the flashing process.
    1 point