Leaderboard


Popular Content

Showing content with the highest reputation on 08/30/19 in all areas

  1. 1 point
    --------------- THANK YOU TO EVERYONE FOR ENTERING! The winners have been announced at the end of this thread. ---------------

    August 29th, 2019 will mark the 14th anniversary of Lime Technology Inc. Much has changed from the early days, but one thing has always remained constant: our awesome community of Unraid users! Seriously, you'd be hard-pressed to find a nicer, more helpful online community, and we are constantly amazed at the excitement, positivity, and all-around friendliness of you all.

    In celebration of all of this, we're giving away 14 limited edition server case badges! To enter to win a case badge, simply comment here. If you'd like, tell us what you enjoy most about your Unraid server, how long you've been a user, or anything else you want to share. It's as simple as that. 14 winners will be selected on August 30th, 2019. See the blog post for full details and pictures!

    From all of us here at Lime Technology, thank you very much for joining us on this incredible journey. Stay tuned in the coming weeks for more special announcements! https://unraid.net/blog/unraid-14th-birthday-giveaway
  2. 1 point
    August 29th, 2019 marks 14 years of Lime Technology being in business. On August 26, 2005, Tom Mortensen (@limetech), the creator of Unraid, posted the very first introductory post about Unraid and thus began the incredible journey and creation of this amazing community! To date, the Unraid community comprises over 130 countries, an untold number of languages, and thousands of friendly, enthusiastic, and welcoming users! This is the discussion page for the official Unraid 14th Birthday Blog. Feel free to chime in on how you use Unraid, what you thought of the Q&A or anything else that comes to mind! Cheers! https://unraid.net/blog/unraid-14th-birthday-blog
  3. 1 point
    Glad you got it figured out. I would have recommended using your cache drive, but I know some people use disk 1. Either way, you should be good to go.
  4. 1 point
    Not only LSIO dockers:
    pihole/pihole
    raymondmm/tasmoadmin
    linuxserver/letsencrypt
    linuxserver/oscam
    library/influxdb
    linuxserver/duckdns
    homeassistant/home-assistant
    library/telegraf
    linuxserver/sonarr
    linuxserver/transmission
  5. 1 point
    I did check, and I still had an empty appdata folder on disk1. My appdata share is set to cache-only, so I had the folder I actually used on the cache drive plus an empty one on disk 1. I removed the empty folder from disk1 since it was empty anyway. I then changed my allocations to /mnt/cache/appdata instead of /mnt/user/appdata. Then I caught a couple of empty paths that were being added to Plex and fixed those. It looked like Docker was trying to force/auto-update all my apps. Plex would stop, remove itself, and then try to reinstall with the newest version. I saw in the log that it would fail at the /music volume. I double-checked its settings and found the movies, TV, and music boxes had each been added twice to the input volumes. I edited this and fixed it. I then clicked "check for updates" on all containers, and when it showed an update was ready, I updated them all. Everything went through and Plex stayed up to date and working, without becoming an orphan image. I'm hoping this fixed my issue. I will look into my mover settings. Thanks Harro.
  6. 1 point
    Do you have a cache drive? Your appdata folder references /mnt/user/appdata/Plexmediaserver, but your docker allocation shows daapd. For your version, use "latest" instead of "docker". If you have a cache drive, the appdata path would be /mnt/cache/appdata/PlexMediaServer. I would remove the docker and install it again; see if that helps.
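A quick way to spot the duplicate-appdata situation described above is to list every copy of the share across the mount points (a sketch; `find_appdata` is a made-up helper, and /mnt/cache and /mnt/disk* are the standard Unraid mount points):

```shell
#!/bin/sh
# List every copy of the "appdata" share across Unraid mount points.
# find_appdata takes the mount root (normally /mnt on a live server).
find_appdata() {
  for d in "$1"/cache "$1"/disk*; do
    [ -d "$d/appdata" ] && echo "appdata found on $d"
  done
  return 0
}

find_appdata /mnt
```

If more than one line is printed for a cache-only share, the extra copies are candidates for cleanup.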
  7. 1 point
    Locking this topic until I can finish reading through it.
  8. 1 point
    Thank you, Xaero. Finally someone who gets it.

    I will say this in defense of LimeTech: LUKS is relatively new for them, and they plan on adding features in future versions, like allowing multiple keys (something that is already built into LUKS but currently has to be done manually with cryptsetup).

    That said, I think LimeTech should keep a single password for the whole array. I shouldn't need to enter a different password for every drive in my array--that can get unwieldy. And salting the password with the license key or drive serial doesn't add much security, since those are available both inside and outside Unraid. Worst of all, that would make my LUKS drives inaccessible in other LUKS-compatible systems because of some Unraid-proprietary manipulation of my password. That's one of the reasons I like both Unraid and LUKS--if my server crashes or I choose to stop using Unraid, I can plug my drives into virtually any other Linux system and access all my files.

    What SHOULD happen--and I don't know if Unraid is doing this or not--is that the AES key should be randomly generated for each drive. With LUKS, the shorter user password unlocks the huge AES key, and it's that AES key that encrypts/decrypts the drive. Each user password has its own copy of that AES key. If you give someone a single drive from your array with their unique password, they now have access to the AES key. So the security hole is that if the Unraid array uses the same AES key for all your drives, they now have the key to all your drives from that single one you gave them. Again, I don't know whether LimeTech uses different AES keys for every drive or not. I certainly hope each drive has its own unique, randomly generated AES key.

    That's getting nitpicky and a little off-topic, and frankly I don't care too much about all that right now. I'm more concerned about the blatant security flaw of my password being saved in plaintext to a file. You're absolutely right about everything else you said.

    I am well aware of all the other flaws you mentioned, but I didn't bother going into those details because I can't even convince these people that saving the array password in plaintext to a file is BAD BAD BAD. Thus, they clearly won't have a clue or even care about Meltdown, Spectre, or use-after-free vulnerabilities. And frankly, none of those are even necessary since the PASSWORD IS MORONICALLY STORED IN PLAINTEXT IN A FILE and left there for all to see. Seriously, people...WTF? "If you can implement mitigation you should." <<< YES! YES! YES!
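The wrap-the-master-key idea described above can be sketched with plain openssl (a toy illustration only; this is NOT how cryptsetup or Unraid actually store keys, and every file name here is invented). Each "drive" gets its own random master key, and the passphrase encrypts only that key, so leaking one drive's master key never unlocks another drive:

```shell
#!/bin/sh
# Toy sketch of LUKS-style key wrapping. Real LUKS never leaves the
# unwrapped master key on disk; it is kept here only to demonstrate
# the round trip.
PASS="example-passphrase"
workdir=$(mktemp -d)
for drive in disk1 disk2; do
  # Unique random master key per drive.
  openssl rand -hex 32 > "$workdir/$drive.masterkey"
  # The passphrase encrypts only the (small) master key, not the data.
  openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in "$workdir/$drive.masterkey" \
    -out "$workdir/$drive.wrapped" -pass "pass:$PASS"
done
# Unlocking a drive = unwrapping its master key with the passphrase.
unwrapped=$(openssl enc -d -aes-256-cbc -pbkdf2 \
  -in "$workdir/disk1.wrapped" -pass "pass:$PASS")
[ "$unwrapped" = "$(cat "$workdir/disk1.masterkey")" ] && echo "round trip ok"
```

Because only the wrapped key depends on the passphrase, changing the passphrase means re-wrapping one small file rather than re-encrypting the whole drive.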
  9. 1 point
    I wholly agree that you cannot fix stupid. But you can implement things in a way that takes stupidity into consideration. This specific instance is capable of being mitigated by implementation. If you can implement mitigation, you should.
  10. 1 point
    That was great! I really enjoyed reading it, and I feel like I know Tom and the team a little better. I hope these types of Q&As happen more often. I'm really enjoying the blog posts you have been putting out.
  11. 1 point
    Hawaiian / Flowered shirts must be the official Limetech dress code. And @jonp is out of uniform
  12. 1 point
    We've not changed anything with our repos. We've just had someone mention in our Discord that there could be AWS issues, which might explain why the Unraid update checker thinks there is an update.
  13. 1 point
    Parity has no concept of files, so you would simply rebuild the same contents again. Do NOT format, as that will write an empty file system to the disk and update parity accordingly, guaranteeing the disk contents are lost.
  14. 1 point
    Nothing wrong, but it sounded like you didn't use CA. I see no reason to add repositories manually as long as the apps are in CA. CA has a much better interface for reinstalling previously installed apps.
  15. 1 point
    I know it's being looked at, hopefully will be fixed for 6.8
  16. 1 point
    Unraid life is simple 😎 Great you have things working again!
  17. 1 point
    To unlock LUKS encryption, a password or keyfile is used. In both cases the information must be stored in a file under /root to allow Unraid to unlock the encryption when the array is started. Once the array is started, the file under /root is not needed anymore (until the array is restarted) and may be deleted; the GUI offers this choice, or you can make use of scripts that automatically delete the file once the array is started. Remember that this file is NOT ACCESSIBLE by regular users, nor is it remotely accessible. It also lives in RAM and is NOT PERMANENTLY stored (rebooting or powering off the system makes the file cease to exist). In short, there is NO SECURITY FLAW.
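The delete-after-start step mentioned above can be scripted. A minimal sketch, assuming GNU `shred` is available and /root/keyfile is the default location (`wipe_keyfile` is a made-up helper, not part of Unraid):

```shell
#!/bin/sh
# Securely remove the LUKS unlock file once the array has started.
# shred overwrites the file with random data before unlinking, so the
# passphrase cannot be recovered from the deleted file.
wipe_keyfile() {
  if [ -f "$1" ]; then
    shred -u -z -n 3 "$1" && echo "keyfile removed"
  else
    echo "no keyfile present"
  fi
}

# After array start on a live Unraid server: wipe_keyfile /root/keyfile
```

Hooking a script like this to an "array started" event covers the case where someone forgets to delete the file by hand.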
  18. 1 point
    There is a top-level folder on your flash device called "EFI" or "EFI-". Rename the folder to EFI if you want to use UEFI, or rename it to EFI- to use legacy boot. The "Permit UEFI boot mode" setting under Main -> Flash device -> Syslinux Configuration will automatically rename this folder accordingly.
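The rename can also be done by hand from a shell. A sketch, assuming the flash device is mounted at /boot as on a standard Unraid install (`toggle_efi` is a made-up helper):

```shell
#!/bin/sh
# Switch between UEFI and legacy boot by renaming the flash drive's EFI
# folder. toggle_efi takes the flash mount point (/boot on a live server).
toggle_efi() {
  if [ -d "$1/EFI-" ]; then
    mv "$1/EFI-" "$1/EFI" && echo "UEFI boot enabled"
  elif [ -d "$1/EFI" ]; then
    mv "$1/EFI" "$1/EFI-" && echo "legacy boot enabled"
  else
    echo "no EFI folder found"
  fi
}

# On a running Unraid server this would be: toggle_efi /boot
```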
  19. 1 point
    I filed a bug report about this:
  20. 1 point
    My question still stands: WHY IS UNRAID STORING MY SECRET PASSWORD IN PLAIN TEXT? If I am not using a keyfile, then why do I have to delete a keyfile every time I start the array? So now I not only have to enter the password, I have to remember to delete the keyfile as well, even though I don't use one? And I should add that it needs to be securely deleted by overwriting with random/cryptographic data to prevent data leakage. And how many other Unraid users have their key exposed and don't know about it because they never poked around?

    LUKS is very secure, but Unraid effectively makes it completely insecure by publishing the password. In fact, I dare say it's even worse than no security, because it gives users a false sense of security: thinking their data is protected lets them relax on other security measures, when in fact their password is published in clear text. It's not a hash. It's not salted. It's the password stored in a plain text file with the blatant name "/root/keyfile". It's the equivalent of a password written on a Post-it note stuck to your monitor. How is this not a big deal to people? This is a HUGE security flaw that basically negates LUKS entirely.
  21. 1 point
    That's good, but I meant replacing both cables on each device; sometimes the power cable can cause similar issues. For now, though, see how it goes with just the SATA cables.
  22. 1 point
    It might be worth using something like DeoxIT on the connectors to improve conductivity. This is one area where you can get degradation over time.
  23. 1 point
    You have what appear to be connection issues on both the parity2 and cache devices. I recommend replacing both cables on each of them, then running xfs_repair on the cache device.
  24. 1 point
    In this case all the info needed was in syslog.txt (though the same things can be seen in other files, e.g. lspci.txt and lsscsi.txt).

    This shows that the Intel SATA controller is set to IDE mode:
    Aug 28 18:49:01 Tower kernel: ata_piix 0000:00:1f.2: version 2.13
    Aug 28 18:49:01 Tower kernel: ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ]

    And this shows that the LSI is an older SAS1 model:
    Aug 28 18:49:01 Tower kernel: ioc0: LSISAS1064E B1: Capabilities={Initiator}
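One way to pull those controller lines out of a diagnostics syslog yourself (a sketch; `scan_controllers` is a made-up helper and the driver pattern is not exhaustive):

```shell
#!/bin/sh
# Scan a diagnostics syslog for storage-controller driver lines; the driver
# name reveals the mode and model (ata_piix = Intel SATA in IDE mode,
# ahci = AHCI mode, LSISAS1064E = a first-generation SAS1 HBA).
scan_controllers() {
  grep -E 'ata_piix|ahci|LSISAS|mpt[23]?sas' "$1"
}

# Example using the lines quoted above (on a real system, point this at
# the syslog.txt inside the diagnostics zip instead):
cat > /tmp/syslog-sample.txt <<'EOF'
Aug 28 18:49:01 Tower kernel: ata_piix 0000:00:1f.2: version 2.13
Aug 28 18:49:01 Tower kernel: ioc0: LSISAS1064E B1: Capabilities={Initiator}
EOF
scan_controllers /tmp/syslog-sample.txt
```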
  25. 1 point
    1st gen LSI controllers only support drives up to 2.2TB; only SAS2 or SAS3 models support larger. You can use the onboard SATA controller instead, though change it to AHCI mode if available.
  26. 1 point
    Your controller probably only supports drives up to 2.2TB. Go to Tools -> Diagnostics and attach the complete diagnostics zip file to your next post.
  27. 1 point
    Due to the pervasiveness of CA, bugs (and this is a bug introduced roughly a year ago and previously reported) aren't a particularly high priority to fix, especially one which is only a display aberration. That, and for the same reason, many repository maintainers can't even be bothered (rightfully so) to publish the URLs for their repositories in the first place.
  28. 1 point
    Which may or may not mean it's a good idea to push that version to a production environment. "Stable" unifi software has caused major headaches in the past, I'd much rather wait until it's been running on someone else's system for a while before I trust my multiple sites to it. If wifi goes down, it's a big deal. I'd rather not deal with angry users.
  29. 1 point
    We have this implemented for 6.8 release.
  30. 1 point
    Hey all! I've been using unRaid for almost a year now and stumbled upon a post by congeato on the Home Assistant forums where he documented how to install Hassio in a VM on unRaid. I decided to make a video about it, as it was extremely helpful to me and maybe someone else will appreciate it as well. https://youtu.be/BbzZjnCSEjs My unRaid server has an i7-2600 with 16GB RAM. Hassio performance is amazing! Reboots take less than 10-15 seconds and it's extremely snappy compared to a RPi3. Original link to the forum: https://community.home-assistant.io/t/hassio-on-unraid/59959
  31. 1 point
    How do you get edac-util running on Unraid?