Leaderboard

  1. Sissy
     Members
     • Points: 6
     • Content Count: 19

  2. ich777
     Community Developer
     • Points: 5
     • Content Count: 2991

  3. Ford Prefect
     Members
     • Points: 4
     • Content Count: 1768

  4. limetech
     Administrators
     • Points: 3
     • Content Count: 9707


Popular Content

Showing content with the highest reputation on 12/16/20 in all areas

  1. I'm fine with LimeTech charging subscription fees for any kind of service, whether cloud-based backup, priority access to tech support, or any other service for which they incur ongoing costs. I hope such services are made available to licensees as well as future customers who choose to acquire Unraid through subscriptions. As long as LimeTech continues to offer traditional (non-subscription) Unraid OS licenses that include access to all 'core OS features' and upgrades, I will be rooting for them as they expand their customer base and revenue stream through subscription offerings.
    6 points
  2. Please try this: Stop the array, select each data disk, and set its file system type to "xfs". Start the array, then click Format; let it complete and observe that all disks show 'mounted' and everything looks good. Stop the array, select each data disk, and set its file system type to "xfs-encrypted" (all 3 of them). Start the array and observe that all 3 data disks should show 'Unformatted'. Enter your encryption key and click Format. At this point, all should show up as encrypted and mounted. But if not, please capture diagnostics at that point and post here.
    2 points
  3. I think I'll add a note about it, since not everyone needs the extra path, and you can't leave it empty or the container won't start. Glad to hear it. luckyBackup offers the option to sync directly over SSH, and I've also built in something that generates the keys for the container, which you can then add to the known hosts (I still need to change something; at the moment you can only get them via the CLI).
    2 points
  4. So what is wrong with Unraid charging for cloud-based backup, if you want it? Some of you are already paying someone else to do it.
    2 points
  5. As always, prior to updating, create a backup of your USB flash device: "Main/Flash/Flash Device Settings" - click "Flash Backup". Besides bug fixing, most of the work in this release is related to upgrading to the Linux 5.9 kernel where, due to kernel API changes, it has become necessary to move device spin-up/down and spin-up group handling out of the md/unraid driver and have it handled entirely in user space. This also let us fix an issue where device spin-up of devices in user-defined pools was executed serially instead of in parallel. We should also now be able
    1 point
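    The serial-vs-parallel spin-up difference mentioned above can be sketched as follows (a minimal Python sketch; spin_up() and the device names are illustrative stand-ins for the real spin-up commands in Unraid's user-space handler, not its actual code):

    ```python
    # Why parallel spin-up beats serial spin-up: the serial cost grows with
    # the number of disks, while the parallel cost stays roughly one
    # spin-up delay.  The 0.1 s sleep mimics a disk taking time to spin up
    # (real disks take several seconds).
    import time
    from concurrent.futures import ThreadPoolExecutor

    def spin_up(device, delay=0.1):
        time.sleep(delay)          # stand-in for issuing a spin-up command
        return device

    def spin_up_serial(devices):
        return [spin_up(d) for d in devices]

    def spin_up_parallel(devices):
        # One worker per device so all spin-ups overlap in time.
        with ThreadPoolExecutor(max_workers=len(devices)) as pool:
            return list(pool.map(spin_up, devices))

    devices = ["sdb", "sdc", "sdd", "sde"]

    t0 = time.monotonic()
    spin_up_serial(devices)
    serial_s = time.monotonic() - t0       # ~4 x 0.1 s

    t0 = time.monotonic()
    spin_up_parallel(devices)
    parallel_s = time.monotonic() - t0     # ~1 x 0.1 s

    print(parallel_s < serial_s)           # True
    ```
    
    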
  6. Overview: Support for the docker template and container for Overseerr App: Overseerr - https://github.com/sct/overseerr Docker Hub: https://registry.hub.docker.com/r/sctx/overseerr
    1 point
  7. I know I'm asking a lot. Is it possible to get Unraid 6.9 to LTS kernel 5.10 before the full release? The only reason I ask is the comfort of having an LTS kernel, both for support as a user and also for the devs. I'm not gonna be upset if I'm told "no"; just thought I would ask and figured an LTS kernel could benefit the Unraid team in this release. Edit: also, thanks for the RC1 release! It seems to be working well so far! Thanks for fixing the dashboard issues!
    1 point
  8. Yeah I see my fault. I just asked a friend to try and he said he could join with no issues. I feel quite stupid now. Sorry for the inconvenience. Thank you so much for your work and thanks for your help.
    1 point
  9. For what it's worth, I just updated the Roon CORE running in this docker container to 1.7 (build 710) via the Roon Remote app running on my Mac without any problems. I've updated my template accordingly, so anyone interested in starting over with the correct mappings can delete their Roon container, delete their appdata directory, and pull the container again using the new template. If you backed up your library via Roon, you should be able to restore the backup once you have your new CORE up and running. If you'd prefer to just update your existing container to the correct mapping
    1 point
  10. And a slightly larger case is out of the question for you? If you're building yourself a "NAS" anyway, I'd rather go with something like this: https://www.amazon.de/Eolize-SVD-NC11-4-mini-PC-Gehäuse-System/dp/B005EHRNX6/ref=sr_1_1?__mk_de_DE=ÅMÅŽÕÑ&dchild=1&keywords=eolize&qid=1608151386&sr=8-1 It's small and neat, and an SFF power supply is already included. It's certainly not cheap, but you get 4 drive bays and, if push comes to shove, you can still tuck/stick an SSD in somewhere... It fits easily into an IKEA Kallax, for example; I think there w
    1 point
  11. Thank you! Thank you! Thank you! That did it and I'm all set now!
    1 point
  12. Ok, I've had to escalate to the dev team as I have no idea what's wrong. Will get back to you.
    1 point
  13. God, I sure hope so. Just seeing it on, with lights and fan running, and seeing it detected didn't make me think that was even a possibility, but I'm sure stranger things are possible.
    1 point
  14. OH!! GOD!! I think I solved the problem of the missing GOP in GPU passthrough on my PC. I am very excited at this moment, so let me talk about what I did. My MB is an MSI B450M Mortar MAX; it doesn't need CSM disabled, just boot Unraid in UEFI mode. My GPU is an RX 5500 XT, so I added agdpmod=pikera to the NVRAM boot-args of the config.plist. I have added the vbios from TechPowerUp in the XML config (but I don't think that's the reason). I appended video=efifb:off to syslinux.cfg and rebooted; it looks like: kernel /bzimage append vfio_iommu_type1.allow_unsafe_interrupts=1 pcie_acs_
    1 point
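    For context, the append line described above usually lives in a stanza like the following (an illustrative fragment; the label name and the initrd line are assumed from a stock Unraid syslinux.cfg, and the full set of flags is truncated in the post above):

    ```text
    label Unraid OS
      kernel /bzimage
      append vfio_iommu_type1.allow_unsafe_interrupts=1 video=efifb:off initrd=/bzroot
    ```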
  15. Okay, via process of elimination, it was my windows PC. Then using a network traffic viewer, it was my browser. Even though I had no tabs open. But... I had an extension that was apparently "incorrectly" hitting my server. I disabled the extension, and no more messages in the log. Thank you for the help!
    1 point
  16. Please don't post about the same problem in multiple places. It makes it impossible to coordinate responses. "Crossposting" has been considered bad form on message boards since before the World Wide Web.
    1 point
  17. The images are really grainy and out of focus. It's hard to actually tell what you are talking about.
    1 point
  18. True... and we don't know if it's a problem with the SMB kext, the e1000 driver, or something else. The only solution would be to contact Apple directly and speak on the phone with an expert so they can see exactly what happens (screen sharing). However, we know what Apple thinks about macOS on virtual machines: you can find valuable people willing to help and prepared to solve the issues, or others who simply request the log files generated by the Apple app, and once they see that macOS runs on an unsupported machine they simply don't bother to solve the issue. It happened to me when I
    1 point
  19. I can report the same problem. In more detail: Unraid 6.9.0-beta35, VM pc-q35-5.1, Big Sur 11.1 / OpenCore 0.6.4, e1000-82545em. Copying a 9 GB file from my MacBook Pro (Catalina) connected over Wi-Fi works fine (at least the few times I tested). Copying the same file from the same MacBook connected over 1 Gb/s ethernet leads to a halt after exactly 134.2 MB and then error -8084. Copying a file directly from Unraid shares leads to a kernel panic. The difference between copying over Wi-Fi or ethernet would be the speed (and even more so copying from Unraid shar
    1 point
  20. ...have a look here: You'll find the setting in the volume mappings of the respective Docker container... in the entry there, under "edit -> Access-Mode".
    1 point
  21. It seems you are in the same boat as me. With Big Sur I'm experiencing SMB issues (hangs or kernel panics), and yes, I'm using e1000-82545em; it's not possible for me to use vmxnet3 because, as you may also have experienced, it partially hangs on uploads/downloads. See here also (this refers to another machine I have; look at the last comments): https://discussions.apple.com/thread/252079374 NFS seems to work better than SMB in Big Sur. This is one of the reasons I still haven't updated to Big Sur; Catalina has no issues.
    1 point
  22. I don't use this Docker container myself either. It just seemed odd to me that, although it offers a GUI tool, only one volume is mapped. With /mnt/user, when using only the GUI, source and destination inevitably both end up on the local array, which somewhat contradicts the backup use case. My suggestion was, analogous to what @Alphavil did, to add the path for the destination (optionally) to the template. An external mount, which you can also create with UD, could then be entered there. But the note alone is certainly an improvement too. Thank you for taking the
    1 point
  23. Something else: it doesn't look like this folder mapping is doing anything? This container has always used /data/transmission-home as its base directory. I have tried to separate the config files from this folder on my array to my cache drive, where all my other Docker files are stored, but I have been unsuccessful so far! Check and see if anything is in that folder before removing it, just in case... Git posting on the issue - you may have to set another variable for this to be used. You can also remove the 1198 port; the new PIA configs don't use it anymore.
    1 point
  24. Anytime!! Almost every problem, it seems, is related to some kind of variable/port/path mapping issue!! Why it changed... I can't answer that one!! Cheers!!
    1 point
  25. I believe this should be 'true', not '1' ?
    1 point
  26. Hello! Anyone interested in translating Unraid into Korean is welcome! 안녕하세요! 한국어 번역에 관심 있으신 분들은 어느 분이든 참여해주세요! 🙂
    1 point
  27. 😁 ...in the description of your luckyBackup Docker container, only a default path is mapped for the data in the variables: /mnt/user/ - i.e. the shares on the array: https://hub.docker.com/r/ich777/luckybackup @Alphavil, however, wanted to attach an external disk via unassigned devices and in the end mapped it separately, i.e. not under /mnt/user but under /mnt/disks (see above) ...to me it looks as if this isn't provided for in the template, or the description in the Docker container isn't up to date?
    1 point
  28. I only read along when I have time... What's this about?
    1 point
  29. ...OK, so that works too... I wouldn't have guessed that from the description of the variables in the Docker container. Maybe it's just not up to date? It does make sense in a backup system to name the source and destination paths separately and unambiguously, so you don't mix anything up. Then maybe give the developer a tip... he reads along here and speaks German.
    1 point
  30. I tried a few more things and now it works. Mounting the additional path was my problem; you never stop learning. Thanks for the help 🙂 P.S.: Now my backup to the disk is running too. Cool Docker container, I must say 👍
    1 point
  31. Container is now finished and uploaded, it will take a few hours to show up in the CA App.
    1 point
  32. Hello paperless users, unfortunately paperless hasn't received a lot of updates and bug fixes in the past few months. Even pull requests haven't been merged for some time now. Still, paperless runs rock solid and gets the job done! For some time now, there has been a well-maintained fork of paperless out there. It's called paperless-ng, and I'm happy to announce that paperless-ng is officially available via Unraid's Community Applications store (CA store). Let me briefly outline a few improvements over the existing solution: New front end built with Angul
    1 point
  33. Hi @Jagadguru Yes, I use these with great success. See here: https://mediaserver8.blogspot.com/2020/04/unraid-discrete-usb-passthrough-to.html and here: https://mediaserver8.blogspot.com/2020/04/mediaserver-83-bifurcation-edition.html You can even assign cards in each slot to different VMs, as they can show up in different IOMMU groups. They work using a technology known as PLX. Essentially, there's a traffic-cop chip on them that assigns each slot on the expander board a slice of time on the motherboard slot. In effect, thi
    1 point
  34. I was just watching my syslog being spammed with nginx ... worker process ... exited on signal 6 two to three times per second, and immediately upon finding and closing four stale Unraid web GUI tabs open across two machines, it stopped. Hope this helps someone.
    1 point
  35. I had the opportunity to test the "real world" bandwidth of some commonly used controllers in the community, so I'm posting my results in the hope that they may help some users choose a controller and help others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations; normal reads/writes to the array are usually limited by hard disk or network speed. Next to each controller is its maximum theoretical throughput and my results depending on the number of disks connected; the result is observed parity check speed usi
    1 point
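    The bottleneck described above can be approximated with a simple model (a hedged sketch; the function and the numbers are illustrative, not the author's measurements): a parity check reads all disks at once, so the observed speed is capped by the slower of a single disk's sequential speed and the controller's bandwidth divided across its attached disks.

    ```python
    def parity_check_speed(n_disks, controller_mbps, disk_mbps):
        """Rough upper bound on parity check speed (MB/s).

        A parity check reads every disk simultaneously, so each disk
        gets an equal share of the controller's bandwidth; the check
        runs at the slower of that share and the disk's own speed.
        """
        return min(disk_mbps, controller_mbps / n_disks)

    # 8 disks on a controller with ~1000 MB/s usable bandwidth: the
    # controller is the bottleneck (1000 / 8 = 125 MB/s per disk).
    print(parity_check_speed(8, 1000, 200))   # 125.0

    # The same 8 disks on a ~2000 MB/s controller: now the disks
    # themselves are the bottleneck.
    print(parity_check_speed(8, 2000, 200))   # 200
    ```

    This matches the post's observation that the same disks can check faster or slower depending purely on which controller they hang off.
    
    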
  36. I believe I remember reading, in the first beta that implemented encryption, that there would potentially be a future feature to encrypt an existing disk while keeping all existing data, without the need to format it first. Just wondering if this is still on the "near" future road map? I would really like to encrypt my disks, but currently they are pretty filled up, and the only semi-easy way I am aware of would be to make my only parity drive a data drive, copy all the data off a disk, reformat the old disk with encryption, copy all the data back, and repeat for each disk. I would prefer not
    1 point
  37. Knew before I even clicked on this thread that it would be about a molded Molex-to-SATA adapter. The old saying holds true: Molex to SATA, lose all your data.
    1 point