Leaderboard

Popular Content

Showing content with the highest reputation on 11/05/20 in all areas

  1. With the help of @ich777 and @zspearmint, I am happy to unveil unraid.net in German. 😀
    3 points
  2. I hope my experience will help others, especially if you plan to buy a motherboard with an embedded CPU (Intel Celeron, etc.). My objective was to build a silent, low-wattage HTPC for home use. In Europe (I am in France), 1 W of continuous draw costs roughly 1 € per year, so a configuration drawing 250 W costs about 250 € per year, which is a lot just for Plex!! Netflix + Amazon Prime are cheaper per year. My usage is about 90% Plex/Emby and seedbox, 10% Nextcloud and VPN.
The first thing I did was buy an HTPC case from HDPlex. I chose the H5 gen 2 case: https://hdplex.com/hdplex-h5-fanless-computer-case.html Not cheap, but beautiful. I see it every day under my TV, so that matters.
I do not need a lot of power and speed. Most of the time I am the only user of this server, though occasionally 2 or 3 of us watch videos at the same time. So do I need the fastest disks? No! The latest CPU? No! 10 Gb/s networking? No! Critical response times? No! A big cache? No! An old 256 GB SATA SSD will do the job.
For my first configuration I bought an all-in-one Intel J5005 ITX motherboard with four 2.5" 1000 GB SATA disks. A very cheap motherboard, around 110 €. The disks are easy to find on the second-hand market: everybody replaces the 2.5" HDD in their laptop with an SSD, so you can find barely used 2.5" 1 TB disks for less than 30 €. 2.5" disks are slower than 3.5" but very silent; they consume a couple of watts during use and around 0.1 W at idle. Perfect for my usage.
It worked just fine with 4 disks. On the four onboard SATA connectors, the disks ran at their maximum speed (average 90 MB/s, max 120 MB/s, min 60 MB/s depending on where you read/write on the disk), and the CPU had enough power to run everything simultaneously. I totally recommend this configuration if you only plan to use 4 disks, and if your case allows it; my HDPlex H5 is a bit small, and it is difficult (not impossible) to fit more than 4 disks in it.
But very soon I needed more disks, and I jumped to 6, then 8. That was the beginning of a lot of issues. The first and biggest problem: the Celeron J5005 is limited to 6 PCIe lanes. Two are used by SATA, one by the onboard PCIe x1 slot, and the other three by the network card, USB, etc. So the more disks I added, the more that single PCIe lane was shared: whatever I did, one PCIe x1 lane carried ALL disks after the 4th. With 6 disks, read/write speed was around 80 MB/s; with 8 disks, around 50-60 MB/s.
It was time to invest in a real SAS controller (LSI 9xxx series HBA, PCIe x4). I found one easily on eBay for $20 and waited 3 weeks to receive it. But the card was unusable with my motherboard, which had only one PCIe x1 expansion slot. So let's buy a new motherboard/CPU! And my brightest idea of the year was to buy a J4105 embedded Celeron board in mATX format with one PCIe 2.0 x16 slot and one PCIe 2.0 x1 slot. With the x16 slot I could use my new LSI card and finally stop sharing a single PCIe 2.0 x1 lane across all disks. Of course it did not work at all: that x16 slot runs electrically at x1 speed (that was when I finally read the Intel specifications on intel.com and the ASRock specifications). So the situation was exactly the same as before, only far more expensive, because I had bought the LSI controller, new cables and a new motherboard... for nothing... congrats, me!!! Thanks to eBay and leboncoin, I was able to resell the unusable motherboard without losing much.
Maybe you ask why I wanted to change my configuration at all. I said above that I do not need fast disks or a fast CPU, so why do it? It was slower, but acceptable, you might think! Response: with 8 TB on 8 disks, the weekly parity check ran for 16 to 20 hours!!! Downloading torrents at 100 MB/s used almost 100% of my one PCIe lane's bandwidth. My configuration had become a single-threaded server: even Pi-hole was slow during disk checks or heavy torrent downloads, and sometimes could not answer DNS queries in time. Everything depended on that one PCIe x1 lane used by ALL processes on my server. From a very usable, cheap, low-cost server with 4 disks, I had jumped to a nightmare server that spent most of its time stuck on existing work instead of responding to new requests!!! Moving everything to cache helped a bit, BUT my SSD cache was on a SATA port, so it helped just a bit... or not at all, hard to say.
After this amazing experience of making the same mistake twice, I planned a genuinely extendable configuration. Instead of an embedded 10 W Intel Celeron platform, I chose an Intel T series CPU (35 W max). It took me weeks to find a good opportunity at the right price; not a lot of people sell them. I bought an Intel i5-6400T for 60 €, then a brand-new GA-H110M-S2H motherboard for 50 € with one real PCIe 3.0 x16 slot and two PCIe 2.0 x1 slots. Now I could really use my LSI controller. After all these changes and disappointments, it was also time to stop using the SATA SSD as cache. I had a 512 GB Toshiba NVMe SSD from an old laptop sleeping somewhere, so I bought a PCIe x1 adapter for NVMe (AliExpress, $8, 3 weeks delivery).
And then the dream came true. Everything ran perfectly. Today, with 8 disks, I do not even use the onboard SATA controllers. All disks are plugged into my LSI 9211 controller, all running at their maximum speed, and the controller's PCIe 2.0 x4 link has enough bandwidth to handle everything (Plex, torrents, Pi-hole, etc.) simultaneously on different disks. Putting the cache disk on a PCIe 2.0 x1 lane is also a better idea than cache on SATA: the bandwidth is much larger and it has direct access to the CPU and bridge. I noticed a BIG improvement immediately when I switched.
So today I have a working configuration with 8 TB on 8 disks, very silent and low-wattage (40-45 W under load, 29 W idle). With the help of powertop and ASPM enabled, I reduced idle usage to 25 W. The cost of an Intel T series CPU + motherboard is the same as a brand-new Celeron J5005/J4105 embedded on a motherboard. So DO NOT BUY an embedded Intel Celeron if you imagine using more than 4 disks... or use 3.5" disks so you can grow your storage without adding new disks (but bye bye silence and low power usage). The LSI 92xx controller is perfect: a must-have, and cheap, for more than 4 SATA disks. SATA controllers on a PCIe x1 lane are not that bad, but they completely strangle the bandwidth available to your disks; if you really run simultaneous processes, you feel the difference immediately. Same for cache: prefer an SSD on a PCIe adapter card, where the bandwidth is much higher than SATA. Overall, the configuration without the case cost less than 350 € for 8 disks, CPU/motherboard/RAM, the LSI controller and the PCIe NVMe cache. The case cost 300 €!!! But I cannot compromise on design (or face my wife asking what this ugly box under the TV is). I recently added a DVB-T tuner card to record TV content; it works perfectly, and I recommend the TBS 6281 SE.
Perfect to use with Plex. Hope it helps. (A quick sketch of the lane arithmetic follows this post.)
    2 points
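A quick sketch of the lane arithmetic in the post above (a rough model, assuming the nominal ~500 MB/s usable bandwidth of a PCIe 2.0 x1 link; real controllers deliver less):

    # Every disk past the 4th shares one PCIe 2.0 x1 lane (~500 MB/s usable).
    # During a parity check all disks stream simultaneously:
    echo $(( 500 / 2 ))   # 6-disk array: 2 disks behind the lane -> ~250 MB/s ceiling each
    echo $(( 500 / 4 ))   # 8-disk array: 4 disks behind the lane -> ~125 MB/s ceiling each
    # Controller and protocol overhead drag real throughput down to the observed 50-80 MB/s.
    # A PCIe 2.0 x4 HBA has ~2000 MB/s, comfortably above what 8 disks can demand:
    echo $(( 8 * 120 ))   # 960 MB/s aggregate if all 8 disks run flat out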
  3. All of us at Lime Technology are very excited to announce Larry Meaney as a new full-time hire. Larry has joined us as a Senior Developer/Project Lead. Please help us give Larry, aka @ljm42, a warm welcome!
    2 points
  4. All set up now. Unraid already supported rsync; I just had to configure the QNAP remote end with the correct user share credentials, SSH and encrypted port number, and fiddle with the QNAP drop-down menu options. After a few tries, the 4th QNAP option let the QNAP see the Unraid shares. If others are interested, the screen snip below shows which options worked for me. (A minimal rsync-over-SSH sketch follows this post.)
    2 points
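A minimal sketch of the kind of rsync-over-SSH pull described above (the host name, user, share path, destination and port are placeholders, not the poster's actual settings):

    # Pull a QNAP share into an Unraid user share over SSH on a non-default port.
    # 'qnap.local', 'admin', '/share/Backups/', '/mnt/user/backup/qnap/' and 2222
    # are hypothetical; substitute your QNAP host, source share and encrypted port.
    rsync -avh --progress -e "ssh -p 2222" \
        admin@qnap.local:/share/Backups/ \
        /mnt/user/backup/qnap/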
  5. Yes, don't touch the port: 1337 is PIA's chosen port for WireGuard; port 1198 is used for OpenVPN. (A config sketch follows this post.)
    2 points
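For context, the port in question appears on the Endpoint line of a WireGuard config; a sketch with placeholder keys and addresses (only the :1337 port reflects the post):

    [Interface]
    # placeholder values; use the key and address PIA generates for you
    PrivateKey = <your-private-key>
    Address = 10.0.0.2/32

    [Peer]
    PublicKey = <pia-server-public-key>
    AllowedIPs = 0.0.0.0/0
    # PIA's WireGuard servers listen on 1337 (1198 is the OpenVPN port)
    Endpoint = <pia-server-hostname>:1337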
  6. This was very helpful, thanks. Got me sorted out completely!
    1 point
  7. Simon, that is correct and a normal setup.
    1 point
  8. In the top left of the page there's a "What Is Cron" line. Hover over it for a popup explaining the interpretation it follows. It took me a couple of goes to get mine working, since there are a few different ways to express the same thing, and not all cron systems accept the same structure.
    1 point
  9. I have a cron job that runs every Monday at 1 am; in User Scripts it is 0 1 * * 1. Yours should be 0 2 * * 4 (perhaps THU also works, but I use the ordinal number). You need a space between each element as well. (A note on the field layout follows this post.)
    1 point
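For reference, the five fields those schedules use, in standard cron order (in User Scripts you enter only the schedule expression, without a command):

    # ┌ minute (0-59)
    # │ ┌ hour (0-23)
    # │ │ ┌ day of month (1-31)
    # │ │ │ ┌ month (1-12)
    # │ │ │ │ ┌ day of week (0-7, Sunday = 0 or 7)
    0 1 * * 1    # every Monday at 01:00 (the job above)
    0 2 * * 4    # every Thursday at 02:00 (the suggested schedule)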
  10. I was planning to do the mapping anyway, as I prefer to add maps, but I'm happy for it to auto-add if people haven't changed the defaults, which is how it will function at present. I am including pscsi because I want to use ROM drives, which cannot be mounted as blocks. I will add RAM disks in the future if people want them.
    1 point
  11. I have just received my 2x 27-inch "retina" monitors from Dell and I am breathless. macOS won't perform any better or look better on any Apple hardware in existence today. I'm running macOS on my 5700 XT side by side with Windows 10 on my 3080; I could not be happier.
    1 point
  12. If I stop torrents... and wait 15 min, I am at 26.6 W. With 8 drives on the LSI + SSD cache on SATA, you are in exactly the same configuration I was in a couple of months ago. I have also tried 6 drives on the LSI + 2 parity drives on onboard SATA + SSD on onboard SATA; you will improve performance a bit, but not by much. Especially during heavy load or a parity check you do not see any difference, but at least processes stop hanging, haha. For my use case, a J5005 or an i5-6400T costs the same to set up; the i5 costs about 8 watts more in usage (so roughly 8 € per year), but in return I have an NVMe drive for cache, an LSI HBA and a TBS DVB-T tuner card. You know, I wonder if the T series is more "adaptive" than the J series: maybe the CPU's power usage can fall further with a T series than with the J series (which maybe always stays around 8 to 10 W). I will add 4 new 2.5" 1 TB drives in a couple of weeks; when you start using Unraid for a NAS, you cannot stop. LOL, the next step is of course 2.5" 2 TB drives, they are becoming affordable. I have also posted a pic showing my HTPC server under my TV, in case people want to know how it looks. Great, I'll take a look at powertop, very interesting! (A short powertop sketch follows this post.)
    1 point
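A short sketch of the powertop workflow mentioned above (both invocations are standard powertop usage; try --auto-tune interactively before scripting it at boot):

    # interactive report of per-device power draw and tunable settings
    powertop
    # apply every tunable powertop recommends in one shot
    powertop --auto-tune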
  13. Just FYI: the NWN server is broken when using the 'latest' NWN, because the new 'latest' download is not https://github.com/nwnxee/unified/releases/download/buildlatest/NWNX-EE.zip but https://github.com/nwnxee/unified/releases/download/latest/NWNX-EE.zip
    1 point
  14. I think I'll be home in about 30 minutes; please send me a short PM and I will send you a link when it's ready to download.
    1 point
  15. With the help of @Pducharme and @zspearmint, happy to present unraid.net in French. 😀
    1 point
  16. Indeed, it was my Firefox add-ons acting up.
    1 point
  17. Do you write prose professionally? If not, why not? Your story construction and execution is excellent. Also, welcome! Feel free to ask for help any time you feel you may be treading too close to the sand with your flippers. We are slowly but surely attempting to iron out the worst rough spots in the user experience, any feedback on things that could use some polishing is helpful, especially from the perspective of fresh meat.
    1 point
  18. FYI: I am not sure that v6 will now even run without problems with only 1 GB of RAM. Even with 2 GB, certain functions (such as Unraid upgrades via the GUI) are prone to fail, so 4 GB is probably the practical minimum you should aim for with v6.
    1 point
  19. You can't remove/add devices to the array without a new config, but if you're going to change the array config then parity won't be valid anyway, so it's best to just not assign one: do the new config normally, assign all the data disks you need, and don't assign parity.
    1 point
  20. If the rebuild completed before, i.e., if all disks have a green ball, it won't start another. If needed, you can always do a new config with parity and just check "parity is already valid" before starting the array.
    1 point
  21. You need to start in normal mode, or the drives won't mount. 1 GB is not much for v6, but it should be OK just for copying the data.
    1 point
  22. The footer is positioned normally for me (Chrome). I'll do a pass over the text and send it to you, @romainromss, before sending it to Spencer.
    1 point
  23. You open the WebGUI, go to Settings, and there you'll see AFP (the Apple network service) and SMB (the Windows network service). You need to enable these so that your shared folders become visible on the network. For the folders themselves you then still have to assign permissions to the respective users, which you have hopefully added.
    1 point
  24. Sorry, missed the "killed" part. That is most likely the issue: you need about 1 GB of RAM per TB of filesystem.
    1 point
  25. Well, damn. I feel like a right idiot. Thank you both! Got it working. I'll go and hang my head in shame over there -----> somewhere...
    1 point
  26. Yeah, we have fixed that part and added a few features to make it a bit more user-friendly. We also added a bit of AJAX to the back end to clean some code up; hope you're alright with that.
    1 point
  27. Yeah, it looks like implementing logs would require changing the way the base server is implemented, which is beyond the scope of my involvement with the project. Sorry about that. I will raise the issue of logging with the rest of the dev team though.
    1 point
  28. See Q19 and Q22 in the FAQ to get up and running: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
    1 point
  29. For anyone having the same problem anyway, the fix for me was the following. Just replace PHP_VER with whatever you have under Settings -> System (bottom of the page):

    PHP_VER="7.3.24" && \
    BUILD_PACKAGES="wget build-base php7-dev" && \
    apk add --no-cache --virtual .php-build-dependencies $BUILD_PACKAGES && \
    apk add --no-cache --repository https://dl-3.alpinelinux.org/alpine/edge/testing/ gnu-libiconv-dev && \
    (mv /usr/bin/gnu-iconv /usr/bin/iconv; mv /usr/include/gnu-libiconv/*.h /usr/include; rm -rf /usr/include/gnu-libiconv) && \
    mkdir -p /opt && \
    cd /opt && \
    wget https://secure.php.net/distributions/php-$PHP_VER.tar.gz && \
    tar xzf php-$PHP_VER.tar.gz && \
    cd php-$PHP_VER/ext/iconv && \
    phpize && \
    ./configure --with-iconv=/usr && \
    make && \
    make install && \
    mkdir -p /etc/php7/conf.d && \
    # next command not needed in the LSIO Docker:
    #echo "extension=iconv.so" >> /etc/php7/conf.d/iconv.ini && \
    apk del .php-build-dependencies && \
    rm -rf /opt/*
    1 point
  30. Yip! Spot on! And I came here from Linus' "two gamers, one PC" video... and I've been hooked since then! I had to re-do my unRaid server once because I just dived in and did not really follow best practice for a few things... but once that painful part was done, it has been smooth sailing ever since... mostly by following SpaceInvaderOne's vids 🙂
    1 point
  31. Hi there, so if you create a new VM from scratch, with a new virtual disk, and install Windows, does that not work at all? If you changed the motherboard/CPU, your PCI devices may have new IDs, and you may therefore need to reassign them to your VMs by editing each VM and then saving.
    1 point
  32. Yes, you can use a random USB key as an array drive in your situation.
    1 point
  33. 1 - Yes, but you need at least 1 disk for the array. 2 - Yes, that's exactly how I moved from WHS 2011 (which I still have running now in a VM on my Unraid server; it's still great for backing up Windows clients).
    1 point
  34. I had the same issue and had to roll back to 4.2.5 to fix it. The issue is well documented on Reddit. I ended up adding a tag to the Docker "repository" field. Edit the container's repository field to this: linuxserver/qbittorrent:14.2.5.99202004250119-7015-2c65b79ubuntu18.04.1-ls93 Hope this helps...
    1 point
  35. I could totally see use cases for this... a slave Unraid with an SSD on, say, a Pi that could be used to access running VMs... or a resource-rich, power-hungry monster that a nice, quiet Unraid master could WOL when it needed the CPU/GPU... even a deep-storage setup, where old shares get moved to an archive Unraid machine that the master can wake to access the storage pool, so it can spin up an entire server 😂
    1 point
  36. Well, I spun up PostgreSQL 13, created a database, and executed the following command inside the Nextcloud container: occ db:convert-type --port 5432 --all-apps --clear-schema pgsql nextcloud 10.0.0.10 nextcloud It took about 2 hours to convert my MariaDB to PostgreSQL, and when it finally finished, Nextcloud seems slightly faster, by a small margin, but not by a ton like I had hoped. Is there anything else that can be done to speed up this container? I'm accessing it via SWAG with pretty basic/default settings, and I haven't installed too many third-party apps so far; I'm only really using it for file shares for myself and 2 smaller users. Thanks in advance again. (A sketch of invoking occ from the host follows this post.)
    1 point
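For reference, a minimal sketch of running that occ command from the host (assuming a container named nextcloud built on the official image, where occ sits in /var/www/html and runs as www-data; LSIO-style images differ):

    # 'nextcloud' is a hypothetical container name; official-image layout assumed
    docker exec -u www-data nextcloud php occ db:convert-type \
        --port 5432 --all-apps --clear-schema \
        pgsql nextcloud 10.0.0.10 nextcloud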
  37. Good example: "Exploitation of this bug has not been seen in the wild." https://www.samba.org/samba/security/CVE-2017-2619.html This is a vulnerability that we will get to, but it does not rise to the level of "drop everything you're doing right now and push this fix out". Each new release requires shutting down and rebooting, so we are not going to generate releases every time some random CVE shows up. There has to be a balance where reason and logic are employed.
    1 point
  38. With all due respect man, this is unwarranted. We take security very seriously. Case in point: we totally dropped development last month to incorporate CSRF protection as fast as possible, and it was a hell of a lot of work. We are a team of 2 developers on the unRAID OS core, and one of us can only spend half his time on it because of other hats that must be worn. The reality is that 99% of CVEs do not affect unRAID directly. Many are in components we don't use; many apply only to internet-facing servers. We have always advised that unRAID OS should not be directly attached to the internet. The day is coming when we can lift that caveat, but for now VMs can certainly serve that role if you know what you are doing. If you find a truly egregious security vulnerability in unRAID, we would certainly appreciate an email. We see every one of those, whereas we don't read every single forum post. Send to [email protected]
    1 point
  39. It isn't that unRAID is inherently insecure, just that it isn't hardened. At least that is my impression. The possibility of malware on my Windows PC successfully infecting my unRAID system seems pretty low (knock on wood, I don't like putting that in writing!). Also note that unless the malware was specifically designed to target the unRAID OS and its use of the flash drive, all you would have to do to get rid of the malware is reboot, since the OS is loaded fresh into RAM at each boot. Much higher on the concern level (at least for me) is malware on the PC using SMB credentials stored on the PC to encrypt the data stored on unRAID. This problem isn't unique to unRAID, but that's why we're talking about read-only shares and the Ransomware Protection plugin. Either way, it is great to bring on people who have such a focus on security. I think Limetech has done some really great things in this area recently (the webui now has CSRF tokens) and I'm looking forward to seeing how https is implemented in the next release. If you see problem areas, definitely report them and we can all benefit.
    1 point