Leaderboard

Popular Content

Showing content with the highest reputation on 03/08/24 in all areas

  1. I had to chime in, this hit a nerve. I agree with 1812, they want everything for free. Even downloading movies. They will spend the money for hardware but software they want for free or next to nothing. Take a look at the users that rant the most. They are fairly new members. I doubt they have experienced a hard disk failure. This is where Unraid shines. If they complain about pricing, I doubt they use parity drive(s). I say let them leave and go to an alternative. I chose Unraid 14 years ago. Back then the biggest concern was LT's risk management since Tom was a one-man show. I wanted a system to be expandable, I wanted to use my various sized hard disks, I wanted the disks to spin down, and I liked the idea that you could still access a disk by itself. It had to be an unconventional server, and Unraid fit the bill. I went with the Pro license at that time since it was the only one that covered my hard disk count. I just checked my email invoice from "Tom" and it was on sale for $109 ($10 discount) at that time. I spent more for a UPS. Soon I was maxed out and bought two 1TB drives, then larger drives, and survived through the 2TB limit! I have experienced the introduction of Joe L.'s creations: cache_dirs, unMENU and preclear. We endured a number of LT re-locations. Unraid has come a long way. Thanks Tom! Sorry, I haven't been active on this forum lately; I've been busy doing other things and, frankly, Unraid just works. I have recovered through a number of hard disk failures, parity swaps, array upsizing and array downsizing. All painlessly. BTW, I still have the original flash drive. I didn't cheap out on that. I've recommended and helped set up Unraid using the Pro license for lots of people and not one complained about the cost. When my kids finally move out, we will happily pay for the "Lifetime" license no matter what the cost.
    4 points
  2. No, RAID5 is built completely differently... The Unraid array is ideally suited for multimedia or as a data dump. In contrast to RAID5, one HDD is chosen as parity, which provides the safety net for recovery; the data sits DIRECTLY on the array HDDs. With RAID5 the data is "striped" across the array. That has big advantages and disadvantages. As a media server, Unraid has big advantages:
     - It saves more power and wear; all HDDs can sleep in standby and only the HDD that is actually needed spins up (automatically).
     - You get good read speed, namely 100% of the single disk being read from; a file is not read from all HDDs at once.
     - Unlike RAID5, you can take an HDD out of Unraid, connect it to a PC, and access all the data on that disk, which is not possible with RAID5.
     - With a failed disk you can keep using the system; only the data on the failed HDD is not "visible".
     - With Unraid it is very easy to add HDDs: just plug one in and go. With RAID5 you can't simply add another disk...
     Which brings us to the disadvantage: writing/editing data on the array is veeeery slow, because parity is calculated on every write for that "recovery safety net". This is where a cache SSD is needed/recommended (as a pure Plex server you could also run without a cache). The cache pushes the data to the array at an interval, e.g. at night. In your case the advantages far outweigh the disadvantage; in a typical commercial setting the whole thing would look different and a RAID would have the edge. If writing directly to the disks matters because of the volume written/changed on the disks every day, a RAID is better. An SSD is absolutely required, but it doesn't matter whether it's SATA or NVMe... If your mainboard has a free M.2 slot, it is clearly preferable to buy an NVMe SSD instead of a SATA SSD and keep the SATA port free for the array. You can't use the array for Docker applications/VMs; the apps need at least one SSD. Keep one thing in mind in particular if you are building a Plex library: saving power with a mini board is all well and good, but with a media server it is always important to have a little headroom for adding HDDs. If you need more space in, say, 2 years and have to buy a new mainboard/CPU/case etc., what did the power saving gain you? Mini systems usually pay off when you know exactly what your requirements are and you need
    3 points
  3. If you don't care about electricity costs you can certainly put something together with the Ryzen, but nobody here will really recommend it. Too much power consumption, as mentioned. 10W idle is already a tall order. With a gaming VM and a graphics card it's impossible. Either a frugal server or a gaming server. The two together are mutually exclusive. At least at 10W idle.
    2 points
  4. That is 100% NOT going to happen. AMD is generally not great at idle consumption, and server hardware is even less optimized for it. And with a 10G card at the latest it's a lost cause anyway. You will almost certainly end up at 30+W.
    2 points
  5. Reverting to 1.29.1 fixed it for me as well. Thanks!
    2 points
  6. After seeing your request to JonathanM, I created my own design which I am happy to share. It does not have any markings or trademarks, but it does have around 350 diamond-shaped ventilation holes for cooling. I've uploaded the .stl files to Thingiverse along with some images, notes, and settings. You can download the files and read more about the design here: https://www.thingiverse.com/thing:6520361 I hope that folks find it useful.
    2 points
  7. @nraygun I did not have them mounted, but yes, if they were mounted I would unmount them first. I am thinking of using a similar system to yours for offline backups once I get this all set up how I want it. Cheers
    1 point
  8. It's interesting that everyone thinks like you at the beginning. Myself not excluded. My big server also has an i3-10100. Back then I still had ideas about a gaming VM etc. Then I saw what a dedicated graphics card in the server costs me at idle and gave up on the idea (it all ran quite well, too). The server, with its 16-20W idle power consumption, 8 HDDs and 87TB, now spends basically the whole day in suspend-to-RAM and is only woken via Wake-on-LAN by the Kodi/Jellyfin client. Everything else - HA, Nextcloud and all the services - was handled until now by the N100, and as a test I have now even gone back to a Fujitsu Esprimo with an i5-6500 that costs me 125Wh a day (minimum value). The N100 cost me 190Wh. I simply see that the performance of the small boxes is completely sufficient. At the moment I'm using the N100 as a Proxmox server on a trial basis. That works quite well too. Those things aren't bad either; as mentioned, Synology still uses them in their higher-end consumer NAS units. I still have a J4105 with 3 SSDs running at my parents' place for various services. With 5.5-6W power draw, what more do you want. It also runs HA, Adguard, Unbound and a few other services there. Still plenty. Bottom line: over time I've gone from an advocate of big systems to an advocate of "cheap" smaller systems. I've simply learned that I don't need more, and that many others who just want to set their system up and leave it, without installing new Dockers every day, usually don't either. On top of that: it is so nice and quiet in my office... even though all the machines are in there.
    1 point
  9. Hello EliteGroup, I'm only getting around to trying your answer today and would like to apologize for my late reply. Sorry. Your tip 2 was exactly the right solution; the IP was entered incorrectly. I also had to change the trusted IP and now everything works again. I'm sincerely grateful to you and would never have figured it out myself. I was already about to reinstall the whole container, but hesitated because of the data I had already put into it. Tell me what I can do for you to repay you. Thanks again and kind regards, mikonas 🙂
    1 point
  10. This may be the culprit. It's a postgres database (tensorchord/pgvecto-rs:pg14-v0.2.0). I think it has a memory leak; since the last restart (yesterday) it has accumulated 50GB+ of RAM. It has 4 databases, but I'm currently only using it for Nextcloud. I haven't changed anything and it has been working fine for months. I'll keep an eye on it during the weekend (see the sketch below). Thanks.
    1 point
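     A minimal way to watch a container's memory from the Unraid console over the weekend - a sketch only; the container name "pgvecto-rs" is a placeholder for whatever the template is actually called on your system:
        # one-shot snapshot of CPU/memory usage for the container
        docker stats --no-stream pgvecto-rs

        # log a snapshot every 10 minutes to spot a slow leak
        while true; do
          docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' pgvecto-rs >> /tmp/pg_mem.log
          sleep 600
        done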
  11. I did some more reading on that yesterday as well, and the recommendation - in order to get a correctly configured yaml - is to generate it first via a local installation and then copy it into the Docker container.
    1 point
  12. Sorry, can't help with that, try the container support thread:
    1 point
  13. I've been spooked off of getting 'creative.' The problem is that the first person who tries something will probably see success. But what happens if another person tries the same thing and it turns out that their device provides the same GUID numbers?
    1 point
  14. Sorry, will do. Posted the question here (for anyone stumbling over this thread).
    1 point
  15. Should be fixed now; start the array in normal mode and look for a lost+found folder.
    1 point
  16. Run it again without -n, and if it asks for -L, use it (see the sketch below).
    1 point
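     For context, a minimal sketch of that xfs_repair sequence from the Unraid console, assuming the filesystem lives on md1 - substitute your own disk device, and run it with the array started in maintenance mode:
        # -n is a dry run: report problems, change nothing
        xfs_repair -n /dev/md1

        # real repair; only add -L if xfs_repair itself asks for it
        # (-L zeroes the metadata log and may discard the most recent changes)
        xfs_repair /dev/md1
        xfs_repair -L /dev/md1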
  17. Okay, perfect, then I still need to fit a small SSD. Thank you. That explains the behavior.
    1 point
  18. Docker / appdata must always live on the cache. If you don't have one, the behavior is normal. How is the disk supposed to go to sleep when it is constantly being accessed?
    1 point
  19. I just typed "move --help". And lo and behold: the command exists. I didn't know it until now, never missed it, so I don't need it either.
    1 point
  20. Do you mean Midnight Commander (mc)? "move" means nothing to me. If you mean mc, then I don't know that behavior - and I work with the thing every day, just now again. The Linux RAM cache changes the speed, but never the percentages. Whether with subdirectories or without. With subdirectories it doesn't start for me until it has gone through them all. With the Plex folder and its trillions of files it usually crashes at that point.
    1 point
  21. Possibly, but please note that memtest is only definitive when it finds errors; there are several accounts of users confirming RAM was the problem without memtest finding anything. When there are multiple RAM sticks it's possible to test with just one, and if there are still issues try a different one; that will basically rule out bad RAM.
    1 point
  22. All my disks are always encrypted with a 15+ character password. Among other reasons, precisely because of that (warranty replacements). I can only recommend it to everyone.
    1 point
  23. Leaving aside the topic of multiple servers: I found my culprit regarding C-states in Unraid - it was the fr24feed-piaware docker container. When I disable it, the package state goes all the way down to C6 for me. I'll now keep an eye on the typical consumption.
    1 point
  24. The combination of macvlan and bridging on eth0 is known to be highly likely to cause the system to crash. If you need to use bridging then you should set the docker networking to use ipvlan instead.
    1 point
  25. It's working! Thank you very much.
    1 point
  26. I replaced my RAM and the issue hasn’t returned.
    1 point
  27. If you look at page 1, post 1, you will see that it tells you to read the documentation, which has this covered with a sample here ... https://github.com/jlesage/docker-handbrake?tab=readme-ov-file#intel-quick-sync-video
    1 point
  28. @ljm42 @JorgeB thanks, I got that working now. For the benefit of all people who had the same problem and posted to a bunch of other threads (which have now been linked to this thread), here is my summary of what I have learnt:
     To be able to use the macvlan driver for docker: disable bridging in network settings; here eth0 (and possibly vhost0@eth0) has the IP address of the unraid host.
     To be able to use the ipvlan driver for docker: enable bridging in network settings; here eth0 has no IP address, but is connected to br0, which has the IP address of the unraid host.
     To avoid vhost0@eth0 using the same IP as eth0 (which will be alarmed by arpwatch, pfSense, TrueNAS, etc.): do NOT enable BOTH "IPv4 custom network on interface eth0 (optional)" (default is ON) and "Host access to custom networks" (default is OFF).
     I am currently using these settings successfully; in this setup eth0-eth3 have no IP address but are connected to the bridge br0, which has the IP address of the unraid host.
     Network settings: Bridging: yes
     Docker settings: Docker custom network type: ipvlan; Host access to custom networks: disabled; IPv4 custom network on interface eth0 (optional): enabled
    1 point
  29. Thank you very much for your help. It was the steps described. However, it only worked once I deleted the pool and then recreated it. I'm sure I'll happily be back with my next problem soon… Olaf PS: The double post won't happen again; I hadn't seen the moderation note.
    1 point
  30. Note that with the Asrock N100m you will also need a PCIe SATA controller (ASM1166 and nothing else) for the third hard drive. The board only has 1x NVMe and 2x SATA onboard. In general, a few large disks are better than many small ones. 2x 14TB doesn't cost the earth. Oh, and: Unraid is called Unraid because it simply isn't a RAID, nor does it support one.
    1 point
  31. Hiya JorgeB
     Number of files: 411,199 (reg: 380,126, dir: 31,073)
     Number of created files: 1,927 (reg: 1,866, dir: 61)
     Number of deleted files: 10,757 (reg: 8,620, dir: 2,137)
     Number of regular files transferred: 379,786
     Total file size: 22.88T bytes
     Total transferred file size: 22.76T bytes
     Literal data: 0 bytes
     Matched data: 0 bytes
     File list size: 524.23K
     File list generation time: 0.001 seconds
     File list transfer time: 0.000 seconds
     Total bytes sent: 13.20M
     Total bytes received: 2.21M
     sent 13.20M bytes  received 2.21M bytes  125.81K bytes/sec
     total size is 22.88T  speedup is 1,484,964.81 (DRY RUN)
     rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1330) [sender=3.2.3]
     Script Finished Mar 07, 2024 19:39.59
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/__rsync-delete-test/log.txt
     I can see that there's a LOT of files it's trying to delete. I will go through the list it's generated & see if they're OK to delete & then let it go ahead & do its thing (see the sketch below). Thanks for the nudge - looks like the right direction. I'll take a couple of days at least to go through the list & then report back. Ta sdd
    1 point
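     For reference, the kind of rsync invocation that produces a dry run like the one above - a sketch only, with placeholder source and destination paths; the actual user-scripts entry will differ:
        # dry run: list what would be copied and, with --delete, what would be removed
        rsync -av --delete --dry-run /mnt/user/data/ /mnt/disks/backup/data/ > /tmp/rsync_preview.txt

        # after reviewing the preview, run the same command without --dry-run
        rsync -av --delete /mnt/user/data/ /mnt/disks/backup/data/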
  32. I have no idea how exactly the move went down during the "windows lag" moments, but the "missing" subfolder (and all of its subfolders) had simply been moved into another (adjacent) subfolder. I'm double-checking to make sure everything is still there, but this is looking more and more like user error, solved by a find /mnt -iname maneuver (see the sketch below). Thank you SO MUCH for steering me in that direction!
    1 point
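     A minimal sketch of that kind of search, assuming the misplaced folder was called "Photos" - the name is a placeholder:
        # case-insensitive search for the folder across every disk and share mounted under /mnt
        find /mnt -type d -iname 'photos' 2>/dev/null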
  33. For someone coming across this, I fixed it by adding "--restart=always" to the extra parameters as shown in the sketch below. Be sure to turn "Advanced Mode" ON while editing the docker template, otherwise this field will not show up.
    1 point
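     In terms of what that extra parameter does, it simply ends up on the docker run command the template generates - a rough sketch with placeholder container/image names:
        # the Extra Parameters field is appended to the generated docker run line
        docker run -d --name my-app --restart=always lscr.io/example/my-app:latest

        # equivalent one-off change for an already-created container
        docker update --restart=always my-app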
  34. Fixed my issue below by changing to 1.29.1 per https://community.n8n.io/t/not-able-to-login-to-either-docker-or-npm-version-of-n8n-via-local-network/42120/2 Might work for you as well. Looks like there's a bug with the n8n docker that affects Unraid.
    1 point
  35. I had the same problem! Same fix too. Thx for the info.
    1 point
  36. LT is aware of the issues with the USB Creator program. They have been looking for a developer to rewrite the program (I saw the thread here on this forum). Along with flash drives having more capacity as the years have passed, many companies have been adding funky partitions to their drives (for various bloatware things like backup software). A refactoring to a new Creator program will hopefully address this issue. The strength of Unraid is in its configurability. This is what makes it difficult for things such as setup wizards (and on Linux, WiFi) to exist. Hardware and desired configuration can vary greatly from one user to another. Trying to do this would quickly bloat the software (both on flash and in RAM) and force people into a setup that they do not desire. Unraid is more of a home lab, not a toaster. If you need simplicity, buy an appliance such as a Synology NAS, where they control the hardware configuration and the infrastructure of how and what you can install and configure. For Unraid, it is more about the flexibility you have to configure it however you want (and occasionally break things along the way).
    1 point
  37. I believe the most significant challenge Unraid faces in attracting more customers is not a lack of marketing but the complexity of its setup. I ventured into it a few months ago, and just the initial step of obtaining a supported USB key required substantial research. Then, when you acquire a modern, reputable brand key, it's often too large for the setup to proceed, necessitating the use of external applications and manual installation to make it work. Expecting people to use 16GB sticks today is akin to asking them to revert to using floppy disks. Beyond that, there's inadequate settings information in the OS; the wikis are somewhat outdated. The main resource becomes scouring through old YouTube videos for guidance. The absence of setup wizards and the high risk of errors, along with limitations like no WiFi connectivity from the host, compound the issue. Although I appreciate the operating system, I cannot recommend it to less tech-savvy friends. This, in my opinion, is Unraid's most substantial barrier to increasing revenue, not marketing. Marketing will naturally follow if the software becomes easy to set up and stable. Challenge a typical Windows user to install it with a RAID, a VM, and a Docker container without making mistakes or needing to search the internet for help. Achieve that level of user-friendliness, and revenue will follow.
    1 point
  38. Like you, I'm happy with the "upgrade because it's there" philosophy, especially when it's a product I rely on and trust. The unRAID community, including the forums, SpaceInvader's videos, and Andrew's contributions, are invaluable resources for us "IT dumbass people."
    1 point
  39. I now fully understand what the issue was. I must be blind. However: the fix now also adds support for respecting excludes as well as skipping external volumes (if the per-container setting does not wish to back those up). The two new ideas were contributed by noirsoldats. For anyone who wants to test the fix on their affected setup: I created a small script that patches the file in question. It's a one-liner and it will do the job for you. Running it twice will undo the fix! Open a terminal and paste the WHOLE LINE:
     curl -s https://raw.githubusercontent.com/Commifreak/unraid-appdata.backup/volume_dup_check_fix/vol_dup_fix.sh | bash
     Please reload the settings page and check whether the warnings are now gone or - if shown - correct!
    1 point
  40. I'll try that this week. So far it's been working fine, I just don't want to power it down or reboot since the issue usually happens right after I startup the server.
    1 point
  41. I wanted to post an update about power usage with my CWWK unit. As I mentioned earlier, my unit has the NVMe x4 board that splits the unit's one PCIe 3x4 m.2 slot into four 3x1 m.2 slots. Initially I was commenting about how the power didn't seem to fluctuate much when adding the drives, but there is a problem with simply doing that: drives in the array can't use trim, and as such any flash-based drive would suffer significantly over time. Because of that, among other things, I was using the drives without parity before yesterday. On Thursday I picked up a third 4TB Crucial P3 Plus drive. With that most recent purchase all 5 m.2 slots are filled, and per a suggestion on this forum I decided to convert the 3 Crucial P3 Plus 4TB drives to a ZFS pool so that I could leverage trim on the drives. Of course all the advanced features included in ZFS help. ZFS has actually worked fairly well, but it has come at a cost: its CPU needs, as somewhat expected, are not exactly minor. The good news is that when doing internal transfers from the other two array drives to the ZFS pool I was seeing speeds up to 2.5GB/s - not bad for 3 drives that are limited to around 800-900 MB/s each because of running at PCIe 3x1. I tried turning compression on and off and that didn't make a difference to throughput, and then I also adjusted the memory allocated to ZFS from 4GB to 10GB and no difference was made (see the sketch below). I think the performance limits are currently set by the PCIe 3x1 bus and CPU. The bad news is that it drove CPU way up, and that in turn drove power usage up while doing transfer-intensive tasks. A low-power solution with these mini PCs/MBs may be served better by spinning rust instead of a set of NVMes. At least then you wouldn't need trim to maintain performance and such, which kind of makes ZFS a requirement. It does maintain fairly low power usage when the drives are not very busy and just handling regular system tasks. At most, with large transfers happening from the 2 array drives to the ZFS pool (all 5 drives active), my unit was hitting up to around 45 watts. Previously, with all drives in the pool and no ZFS, the power draw when doing continuous transfers would stay at most around 20 watts. So just some food for thought about using NVMes for main storage. I think this will put a big kink in the Lincstation N1.
    1 point
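     A minimal sketch of checking those ZFS knobs from the CLI - the pool name "nvmepool" is a placeholder, and on Unraid 6.12 the pool itself is created through the GUI rather than by hand:
        # confirm TRIM and compression settings on the pool
        zpool get autotrim nvmepool
        zfs get compression nvmepool

        # turn them on if needed
        zpool set autotrim=on nvmepool
        zfs set compression=lz4 nvmepool

        # raise the ARC ceiling to ~10 GiB for the current boot (module tunable)
        echo $((10 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max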
  42. I’ve run memtest and there were no errors after 8 passes.
    1 point
  43. I appreciate the response regarding the licensing situation. The other concern that I would like some explanation for is the privacy issue related to the new update mechanism. I was finally able to coax some debug logs out of my browser, and I discovered that there's a lot of information being sent with every click of the "Update OS" button:
     apiVersion, caseModel, connectPluginVersion, description, expireTime, flashProduct, flashVendor, guid, inIframe, keyfile, lanIp, name, osVersion, osVersionBranch, registered, regGuid, regExp, regTy, regUpdatesExpired, site, state, wanFQDN
     Some of these make sense as part of a license check (guid, keyfile, flash information). Some, though, seem to be quite extraneous: caseModel (does Limetech really need to know what kind of case my server is in?), LAN IP, hostname, description... none of these are needed to validate a license. The privacy policy (https://unraid.net/policies) says nothing about collecting this kind of information:
     What is the primary purpose for collecting all of this information?
     Is the information used for other purposes? If so, what?
     Is this information stored? If so: is it stored in identifiable form, and how long is it retained?
    1 point
  44. You clearly didn't make the effort of even reading the post you're replying to.
    1 point
  45. If you make this a subscription service you've lost a customer.
    1 point
  46. CWWK just dropped a new MB that includes an AMD Ryzen 7840HS embedded processor. It also has a ton of drive capability with 9 SATA connections. From a processing perspective this is a powerhouse. I wonder how it will do power-wise. https://cwwk.net/collections/frontpage/products/cwwk-amd-7735hs-7840hs-8845hs-7940hs-8-bay-9-bay-nas-usb4-40g-rate-8k-display-4-network-2-5g-9-sata-pcie-x16-itx-motherboard
    1 point
  47. Please remove your guide if possible! This will certainly cause issues for users if they upgrade or modify the libvirt.img in any way. Instead I would recommend that you install the files which ship with Unraid 6.11.x via this method. Please make sure that you stop all of your VMs first and then execute these commands from a terminal:
     mkdir -p /tmp/edk2
     cd /tmp/edk2
     wget -O /tmp/edk2/edk2.txz https://github.com/ich777/edk2-unraid/releases/download/edk2-stable202305/ovmf-stable202305-x86_64-3.txz
     installpkg /tmp/edk2/edk2.txz
     rm -rf /tmp/edk2
     This will install the edk2 firmware files from 6.11.5 (should be the build from 202305 - build 3). BTW, you can find all the builds from edk2 over here.
    1 point
  48. Hello. My name is Conner. I have OSD – Obsessive Server Disorder. They say the first step is to admit you have a problem. Here is my story. It all started innocently enough. Last year, anticipating a $600 stimulus check, I decided I would build an Unraid server. I had a handful of unused components from a decommissioned PC – a 1st gen Ryzen, 8GB of DRAM, a motherboard, a small NVMe drive. I had packed too many 3TB drives in my small daily driver PC, and it would always be powered on, running my Plex server. Relocating those drives and off-loading that task to a small server seemed to be a reasonable idea at the time. The build went mostly smoothly. I only overshot my budget by a small amount. An extra fan here, an internal USB header cable there. The extra money spent to make it clean was worth it to me. I loaded up the media server on the machine. Then I started thinking, "What else can it do?" This is where I went down a rabbit hole of trouble. Found a good deal on some 6TB drives. I bought 3 of them. Future proofing is good, I felt. It was nice to see that extra storage space. The 8GB of DRAM seemed inadequate as I started installing more Dockers, so I added 8GB more. I'm up to 28 Dockers installed, with 22 running all the time. At least another half dozen pinned in CA, to try out in the future. I started with an old GT760 to do some hardware transcoding. But felt it worth upgrading so I could handle NVENC H.265. A Quadro P400 only costs around $100. The power supply I had was very old and less than trustworthy, so a new one was ordered. I found a great deal on a UPS, to prevent those annoying unclean shutdowns from summer thunderstorms. Looking for an offsite backup solution, I again repurposed those 3TB drives I had moved: I took them out of the server and put them in external USB enclosures, to swap and safely keep at work. I ended up buying 4 more drives (two 6TB and two 8TB). The Intel NVMe is small and slow, so I now have a 500GB drive to install as cache in the upcoming weeks. I worry about how I'm affecting my family. I have already corrupted my son. He really enjoys being able to request and add media through one of the Dockers, and stream to his (or his girlfriend's) apartment. The domain name I purchased makes it easier for him, as well as allows me to get around the DNS firewall at work, to access the server. My wife rolls her eyes when another package arrives, with more of my "toys". But I feel she may be enabling me. I may need to add the Amazon driver to this year's Christmas list. I was thinking that Limetech may consider creating a sub-forum, where folks like us can help each other through our OSD issues. But I decided that may not be the best idea – it would be like holding an AA meeting down at the local pub. Thank you for letting me share my story.
    1 point
  49. Yes! If you want to share anything with the container, keep in mind that you may mess up the permissions on that share if you haven't created the appropriate user and group and assigned the user in the container to that user and group, so I would recommend that you set up a test share to do this. The steps would be:
     Start up the container and create the directory where the files should be mounted (in this example "/hostshare", with the command "mkdir -p /hostshare")
     Stop the container
     Create a share on Unraid (in this example we will use "lxcshare" with the path "/mnt/user/lxcshare")
     Set the permissions on this share to 777 with "chmod -R 777 /mnt/user/lxcshare"
     Open up your config file for the container (you'll find the path by clicking on the container name in the first line)
     Add this line: "lxc.mount.entry = /mnt/user/lxcshare hostshare none rw,bind 0.0" (without double quotes) and save the file
     Start the container again and navigate to /hostshare
     Depending on how you set up the container and users you may be able to skip a few steps or have to do other things to make it work, but the above is the most basic test scenario which will work with almost any configuration.
    1 point
  50. A small variation if you want the key to not be locally present on the system when operational; the key is only needed during startup of the array. In the go file the following is included before starting emhttp:
     # auto unlock array
     install -D /boot/custom/bin/fetch_key /usr/local/emhttp/webGui/event/starting/fetch_key
     install -D /boot/custom/bin/delete_key /usr/local/emhttp/webGui/event/started/delete_key
     install -D /boot/custom/bin/fetch_key /usr/local/emhttp/webGui/event/stopped/fetch_key
     # start webGUI
     /usr/local/sbin/emhttp &
     The above makes use of the built-in event system of unRAID. These events are created:
     starting: this event is called before the array is started and is used to fetch the key from a remote source
     started: this event is called after the array is fully operational and is used to delete the key locally
     stopped: this event is called after the array is stopped and is used to fetch the key again from a remote source
     The script "fetch_key" can be any method to obtain the key remotely, e.g. using a mount method or an FTP (wget) method as explained in the video of @gridrunner. The script "delete_key" is a simple file to delete the key locally.
     fetch_key
     #!/bin/bash
     if [[ ! -e /root/keyfile ]]; then
       mkdir -p /unlock
       mount -t cifs -o user=name,password=password,iocharset=utf8 //192.168.1.99/index /unlock
       cp -f /unlock/somefile.png /root/keyfile
       umount /unlock
       rm -r /unlock
     fi
     delete_key
     #!/bin/bash
     rm -f /root/keyfile
     You can start and stop the array as usual, and the key will be automatically fetched each time, provided that the remote service is up and running. The files "fetch_key" and "delete_key" need to be stored on your flash device. I've created the folder /custom/bin to hold my custom scripts, but one is free to choose their own source folder; please update the lines in the go file accordingly.
    1 point