Everything posted by Energen

  1. Not against the idea, but I don't see the real purpose/need for it. Any hardware works, generally speaking, and not everything is universally available. If 90% of a list is hardware that's only available in the US, or vice versa, or some mix of availability, the list becomes more of a problem for someone trying to locate parts from it, because they keep finding stuff they can't actually buy. Add to that anyone who submits their existing hardware that's obsolete and no longer sold, and you just create a mess of a worthless hardware list.

     And would you only add hardware that's used by more than one person, or would you let anyone add anything? That opens the door to having 500 motherboards, 500 CPUs, etc. That's not a usable list to choose from, even if low-ranked things weren't displayed.

     I think it could be a good idea, to some extent, to create a small set of specific hardware lists --- say, 10 specific motherboards that are recommended for their features, stability, and availability --- instead of every motherboard on the planet that people are using because they had some old hardware lying around. But even with that, what are the criteria? SuperMicro alone has 20-40 boards that could be used, each with its own features/specs.

     And whatever you might try to do, it should be included as an update/addition to the Unraid wiki, which, as others have already mentioned, needs updating. If you want to talk targeted hardware, the list of SATA/SAS cards in the wiki is a good example of something that should be brought up to date for 2020: what's still available, what's compatible, etc.

     And then, if you really wanted to get creative, with limetech's help you could build a feature into Unraid itself (or a plugin) --- an optional, user-opt-in ability to send statistics back to [limetech] servers about the hardware in use, like what's displayed in Dynamix System Info. You could build a hardware database of motherboards/CPUs automatically that way.

     But I still don't think it's worth the effort. My personal opinion. There are too many factors and variables to be able to build a list like you're suggesting. I'd be happy to be proved wrong, though.
  2. Not sure if you need to do both of these, but global share settings have to allow disk shares, and then under Shares > Disk Shares make sure the disks are set to Export: Yes. The disks should then show up over SMB like \\TOWER\disk1. You can also enable a rootshare in the SMB configuration file, per SpaceInvaderOne's video (a sketch of what that looks like is below).
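     For reference, the rootshare trick is just an extra share definition added to Samba's extra configuration (Settings > SMB > SMB Extras). A minimal sketch --- the share name, path, and user here are placeholders, not the exact lines from the video:

         [rootshare]
             path = /mnt/user
             comment = all user shares under one mount
             browseable = yes
             valid users = youruser
             write list = youruser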
  3. Unrelated but apparently Plex updated their GUI on my smart tv... and I'm really not liking it.
  4. Did you have a backup of your flash drive? The flash drive is what holds all your array/share info, not the appdata folder.
  5. In your scenario it seems you're only thinking about your data disks. What if, while your server was out of your control, your parity drive(s) had data written to them? What if both your data and parity drives were written to? You said you didn't want invalid data written to the parity drive, but if the parity drive were actually invalid and you wrote that back to the array, it would corrupt your files, no?

     I'm not smart enough to follow along with this single-disk parity checking stuff, but it sounds like it's more work than it's worth, and more complicated to implement than any potential benefit it would give. So if you have a system of 10 disks, you're suggesting running something like 100 different parity checks for each byte because you keep excluding one disk at a time? For what purpose, versus how it's done now? I don't understand.

     Your final assumption is that you have a case of "single disk corruption", but how do you know, unless you finish the parity check fully using both disks? You could have other errors on the other disk too. And in any event, which byte would be taken as the correct byte? What if that byte was incorrect? There goes your data, again.

     I'm missing the point here, I think, but if it came down to recovering from data modified externally outside of a mounted array, any implementation would be purely 'guessing' at which data was the correct one. And you'd want to do a parity check on all your drives, and verify the parity on both your parity disks too, not exclude one because you 'think' you have single-disk corruption.
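     To illustrate why a single parity disk can't locate an error: Unraid's first parity is a plain XOR across the data disks, so a flipped byte on a data disk and a flipped byte on the parity disk produce the exact same failed check. A toy sketch in bash, with made-up byte values:

         # Parity byte p = d1 xor d2. Flip a bit on a data byte OR on the
         # parity byte: the recomputed check is nonzero either way, and the
         # mismatch alone can't say which disk holds the bad byte.
         d1=170; d2=85; p=$(( d1 ^ d2 ))
         echo $(( (d1 ^ 1) ^ d2 ^ p ))   # prints 1: data byte corrupted
         echo $(( d1 ^ d2 ^ (p ^ 1) ))   # prints 1: parity byte corrupted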
  6. Energen

    My Intro Post

    Welcome aboard. You will quickly find that Unraid has a lot more to offer than just using it as a NAS and you will have some fun tinkering around with it.
  7. So I'm not an expert at user accounts, but here's what I'll say: it sounds like you want to do exactly what having user accounts is supposed to prevent. If Mary creates a file, why should John be able to delete it, and vice versa? It's Mary's file; she doesn't want anyone else touching it. Why bother with user accounts if you want to be able to delete each other's files? That defeats the purpose of the user account in that sense. There's really nothing else to use user accounts for on Unraid, at least in my opinion; except for preventing someone else from deleting files on your own user share, I don't personally see the need for them. And if you're somehow trying to "mitigate risk" by using user accounts, the ability to delete stuff across accounts once again defeats that attempt.

     However, with that said: it seems, through my very quick and basic testing, that your problem is mainly the accessibility from Windows shares. How did you mount the share? On Unraid I created two new users, test1 and test2, and a new share called 'test' with 'Secure' security settings and R/W for both users. I created a file on the share from each user account. I mounted the share on Windows via normal drive mapping, anonymous connection; since the share's security setting only allows guests read-only access, I couldn't do anything to the files. I remounted the share logging in with test1's credentials and could delete either file. Same for test2's credentials (example mapping commands are below). The owner of the file had no bearing on my ability to delete it from either mapped drive user account. This again means that user accounts do not mitigate any kind of risk. So do you have yours mounted in the same way, or something different?

     Off topic: got myself a Canon EOS T6 that I play around with... how about you?
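     For anyone repeating the test, mapping the share with explicit credentials from a Windows command prompt looks something like this (TOWER, test, and test1 are from my test setup above):

         :: Map Z: as user test1; the * makes Windows prompt for the password
         net use Z: \\TOWER\test /user:test1 *
         :: Drop the mapping so the share can be remounted as another user
         net use Z: /delete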
  8. http://apcupsd.org/manual/manual.html#customizing-event-handling
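     The short version: apccontrol runs a script named after the event, if one exists in apcupsd's scripts directory, before carrying out its default action for that event. A minimal sketch (the /etc/apcupsd path and the mail command are assumptions; adjust for your install):

         #!/bin/sh
         # /etc/apcupsd/onbattery -- runs when the UPS switches to battery
         echo "UPS on battery at $(date) on $(hostname)" \
             | mail -s "UPS on battery" admin@example.com
         exit 0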
  9. I pulled linuxmint-20-mate-64bit.iso (1.91GB) from my array to my main SSD drive and it was consistently around 90-95MB/s. Not really sure how that translates to a typical gigabit network. Writing the iso back to the array was a good 110MB/s consistently.
  10. So I have not perfected my own backup solution yet, but these are a couple of things I've tried, and they all work well enough.

     Nextcloud --- the overhead to run this might not be worth the effort; I do still have it running.

     Google Photos app (on iOS and Android) --- automatically backs up your gallery to your Google account (also running currently).

     icloudpd docker (https://hub.docker.com/r/boredazfcuk/icloudpd/) --- I used this very briefly to test downloading photos from iCloud. It worked fine, but every so often you have to refresh the cookie to keep downloading, so it's not a "set and forget" solution. It also will not *only* download new photos: it downloads ALL your photos, or only the X most recent ones. There's no in-between to fetch just new/missing photos. So in effect, if you download 50 photos, move them from the download location to your photo storage location, and then download again, you'll get all the same photos a second time. This can create a nightmare of duplicate photos to sort through (a quick way to find them is sketched below).

     GooglePhotosSync docker --- to download from the Google account (used in conjunction with the Google Photos app). I honestly forget if I used this one at all.

     Essentially, no matter what you do, it will not be a fully automatic process. There will be some degree of "problems" due to duplicate photos, etc. I'm still trying to figure out the best process for myself. The main problem is that leaving the photos on the phone necessitates de-duplication.
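     For the duplicate problem, a minimal sketch of the checksum approach (assuming GNU coreutils and a hypothetical /mnt/user/Photos path): hash every file, sort by hash, and print the groups of byte-identical files.

         # uniq -w32 compares only the first 32 characters (the md5 digest);
         # -D prints every line that belongs to a duplicate group
         find /mnt/user/Photos -type f -exec md5sum {} + | sort | uniq -w32 -D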
  11. So I actually have the opposite opinion. I used to _only_ use Kodi, for years: sideloaded it on my FireTV sticks and used it with my PC media before I'd ever even used Unraid (which then turned into using a LibreElec VM). And in some sense I still prefer the UI of Kodi compared to Plex; especially when it comes to figuring out watched vs unwatched content, the Kodi UI is just simpler. But I've evolved, and have gotten used to the Plex UI. That was _my_ primary factor in my own Kodi vs Plex debate. At this point, I don't really miss Kodi.

     It should be noted that I never used Kodi for any streaming plugins, so that could be a huge factor in someone's opinion too; I only used it to stream my local media, and in that sense Plex is perfectly fine. Never really used Emby to any degree, so I can't comment on that, or any other media servers to be honest. Between Kodi and Plex all your bases should be covered. Basically, pick your ecosystem. It's the great Windows vs Mac vs Linux debate, but for media streaming.
  12. Instead of using max protocol, I'm using:

         server min protocol = SMB3_11
         client min protocol = SMB3_11

     I don't know if it actually causes me any problems or not... I guess I never really bothered to look/investigate. Maybe I should try to find the best settings possible, but boy, it's a headache to speed-test everything reliably (one way to narrow it down is below).
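     One way to make speed testing less of a headache is to take the disks and SMB out of the picture first and measure raw network throughput. A sketch with iperf3 (assuming it's available on both ends, e.g. via the NerdPack plugin or a docker on the Unraid side):

         iperf3 -s          # on the Unraid box: run the server side
         iperf3 -c TOWER    # on the client: default 10-second throughput test
         # ~940 Mbit/s on gigabit means the network is fine, so any SMB
         # slowness is down to protocol settings or the disks themselves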
  13. You don't need the template for LibreElec to work; you can run the latest version as a VM. However, you need a graphics card to pass through in order for it to work. There's only ever been one modified LibreElec version that would work with no GPU, and that's an old version now too.
  14. That's kind of bad advice. The entire point of Docker is to not need multiple VMs, and while you don't want to expose the Unraid GUI to the internet, any internet-related dockers are always exposed, because they have to be. How are you going to run a Nextcloud docker with no external access? That's only one example.

     To the most direct question that was asked: from the most extreme standpoint, if any device on your network were compromised, whether it was Nextcloud on a Pi, in a docker, whatever, the POTENTIAL for complete intrusion is there. It doesn't matter how many ways you try to separate them. The only way to come close to mitigating complete intrusion is to put each device on its own separate network, as much as you can. But that's extreme paranoia.
  15. I have not used NVMe with Unraid, but it "should" be as simple as that. I guess the only problem could be whether or not Unassigned Devices / Unraid can read the OSX file system... only one way to find out!
  16. You can also use the Krusader docker for a more visual file manager.
  17. Ah, my mistake, you're one of these crybaby millennials who think everyone owes them something; I apologize for hurting your feelings. You mention paying for a product: OK, so you paid for your IO devices, so go back to that company for driver support. Don't take out your frustration on Unraid. Oh wait, you can't, because the hardware is obsolete. Yet you want it to magically be brought back to life just for you. Typical.
  18. So you expect limetech to implement something that isn't even compatible in the first place, is not maintained by the original manufacturers, and is not built into the native Linux packages Unraid runs on; to create, develop, and maintain something that would have to be rebuilt every time there was an update? Because 5 people have some obscure, obsolete hardware? I mean, maybe limetech would be interested in doing this; I can't speak for them. But they need to maintain stability for the mass market that you mentioned, not so much for some old stuff that very few people seem to have.
  19. You missed old docker images, check that too. And by some chance you could have a lot of logs piling up as well; you can check log sizes like this:

         du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60 | grep .log

     There's a user script to easily delete docker logs should you need it (a rough sketch of the idea is below).
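     I don't have a link to that script handy, but the core of it is just truncating each container's JSON log in place. A minimal sketch, assuming the default json-file log driver:

         # Empty every container log without deleting the files, so Docker
         # can keep writing to the same open handles
         for log in /var/lib/docker/containers/*/*-json.log; do
             truncate -s 0 "$log"
         done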
  20. That looks pretty good; you should upload your script for us! I can't help you with this one because I don't know how to implement themes, but you might gain some insight by dissecting the current theme plugin to see how it applies things on a permanent basis. You can download the plugin file and extract it like a zip file to see how it's put together.
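     For what it's worth, the .plg file itself is plain XML you can read in any text editor, and the package it installs is typically a .txz you can unpack with tar. A rough sketch (the filename is a placeholder):

         # Unpack a plugin's bundled package into /tmp for inspection
         mkdir -p /tmp/theme-inspect
         tar -xJf dynamix.theme.txz -C /tmp/theme-inspect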
  21. Since nobody explained what reflink was supposed to be/do, I guess it could also have been a cool new pizza oven for XFS and Unraid. From the articles below, reflink is about data block sharing: a reflinked copy is created near-instantly and consumes no extra space until one of the copies is modified (copy-on-write). It might have some effect on the overall overhead of the file system, but it's not directly for that, it would seem. Might also affect file write speed.

     https://blogs.oracle.com/linux/xfs-data-block-sharing-reflink

     And here's another article talking about using it on a ZFS system: https://blogs.oracle.com/solaris/reflink3c-what-is-it-why-do-i-care-and-how-can-i-use-it

     And another: http://dashohoxha.fs.al/deduplicating-data-with-xfs-and-reflinks/

     As far as existing overhead, this is more for what you are talking about:
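     Separately, a concrete illustration of what a reflinked copy looks like in practice (assuming an XFS volume created with -m reflink=1; the paths are hypothetical):

         # Instant copy that shares data blocks with the source; blocks are
         # only duplicated when either file is modified (copy-on-write)
         cp --reflink=always /mnt/disk1/bigfile.iso /mnt/disk1/bigfile-clone.iso
         # Check whether the filesystem was created with reflink support
         xfs_info /mnt/disk1 | grep -o 'reflink=[01]'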
  22. That's the overhead for XFS file system and has nothing to do with this topic, as far as I understand it.