
NAS

Everything posted by NAS

  1. Why haven't I heard of or found this before? HUGE +1 for me, game changer.
  2. If it is for a production but non-critical system I would normally say an absolute YES to upgrading to an unRAID RC. However, as I understand it the beta call home is still in place, and as such, should the internet "licensing" server go offline or be uncontactable, your production server would fail to boot. Important: I am not 100% sure this statement is true, as I don't think anyone knows exactly what passes between the systems, but it seems to be the logical interpretation of "Your server must have access to the Internet to use the unRAID 6.2 rc." and if so that is an extra factor to take into account. The more RC testers the better, but eyes should be open going in.
  3. New wording to cover smaller disks and SSDs:
     * ReiserFS, essentially read-only usage - 99%
     * ReiserFS, essentially read-write usage - 93%
     * XFS/BTRFS, essentially read-only usage - 2% or 20GB free (whichever is smaller)
     * XFS/BTRFS, essentially read-write usage - 5% or 50GB free (whichever is smaller)
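The read-write half of the wording above can be sketched as a small shell helper. This is illustrative only: the `min_free` function name and its interface are my own, not anything unRAID ships, and the percentages simply restate the post (7% reserved for ReiserFS at 93% full, 5% or 50GB for XFS/BTRFS).

```shell
#!/bin/bash
# Hypothetical helper restating the read-write recommendation above.
# min_free <fs> <disk_size_gb> prints the suggested reservation in GB.
min_free() {
  local fs=$1 size_gb=$2
  case "$fs" in
    reiserfs)
      # keep the disk at most 93% full, i.e. reserve 7%
      echo $(( size_gb * 7 / 100 )) ;;
    xfs|btrfs)
      # reserve 5% or 50GB, whichever is smaller
      local pct=$(( size_gb * 5 / 100 ))
      echo $(( pct < 50 ? pct : 50 )) ;;
  esac
}

min_free reiserfs 8000   # 8TB RFS disk  -> 560
min_free xfs 8000        # 8TB XFS disk  -> 50
min_free xfs 250         # 250GB SSD     -> 12
```

Note how the "whichever is smaller" clause automatically scales the XFS/BTRFS reservation down on a small SSD, which is exactly the concern raised later in the thread.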
  4. Sometimes I despair over the internet. So much discussion over one line of changelog, just to help out users who don't read the forum (most). Who cares if it's not the 100% right thing to do when 7 words can save a bunch of users a headache. Oh well, forget I asked.
  5. Normally I would agree, however not this time. Preclear has been a cornerstone of the unRAID community for years now. If an unRAID release breaks something used by so many, users should be informed. It's just the right thing to do for the community.
  6. This needs to be added to the changelog. Most users don't read the forum, and certainly not page 35 of a 50 page thread.
  7. It's a kernel "maintenance" release, ie, bug fixes only. unRAID-6.2 will stay on kernel-4.4. Excellent, thanks for the clarification.
  8. Bumping the kernel within an RC cycle is pretty unusual; I assume this was an exception to fix something?
  9. First off, thanks to everyone for replying. I can confirm that my real-world "feel" for RFS slowdowns matches what RobJ summarized above. I see no reason not to base discussion now on RobJ's recommendations, as any difference between these and my personal recommendations would be within an error margin anyway. The first thing that catches my eye, based on the 93% recommendation for ReiserFS in read-write usage versus the 50GB recommendation for XFS, is just how superior XFS is in this respect. On an 8TB drive, which formats at approx 7.96TB of usable space, ReiserFS needs 507GB extra reservation per disk. In a full array that is a whopping 11.6TB of extra "wasted" space. One area we haven't touched on is SSDs, which apart from being a different technology are much smaller, and the flat GB numbers above for XFS could consume a much higher percentage of the disk than intended.
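The arithmetic behind those figures can be checked in a few lines of shell. The inputs are the post's own illustrative numbers (7960GB usable per 8TB disk, 7% RFS reserve, a 50GB XFS cap, and a fully populated array assumed to hold 23 data disks), not measured values.

```shell
#!/bin/bash
# Back-of-envelope check of the reservation gap described above.
usable=7960                               # GB usable on an 8TB disk (per the post)
rfs_reserve=$(( usable * 7 / 100 ))       # ReiserFS at 93% full -> ~557GB reserved
xfs_reserve=50                            # XFS read-write cap from the recommendation
extra=$(( rfs_reserve - xfs_reserve ))    # ~507GB extra per disk, matching the post

echo "extra per disk:   ${extra}GB"
echo "across 23 disks:  $(( extra * 23 ))GB"   # ~11.6TB array-wide
```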
  10. Minimum free space recommendation discussion forked here: http://lime-technology.com/forum/index.php?topic=50336.0
  11. Inspired by the following LT staffer personal recommendation: There are a few follow-on discussions, but in general terms everyone seems to agree that:
     * You should never fill up a drive to 100%
     * You should ALWAYS reserve a certain amount of free space
     * The amount of reserved free space depends on the filesystem type
     * The amount of free space reserved depends on the use case (i.e. predominantly read-only disks need less reservation than read-write disks)
     What is not agreed upon is:
     * Should the reservation be a percentage, a GB amount or a combination of both
     * The difference each FS type makes (e.g. anecdotal evidence suggests XFS needs hardly any reservation whereas RFS needs comparatively lots)
     This is an important topic as people will (and should) try to follow recommendations. More so, with the advent of 8TB+ disks, strict adherence to the "10%" reservation recommendation amounts to 18.4TB of unused reserved space on a fully populated array (which is obviously a lot). Google yields lots of other personal recommendations, from the tiny (a few MB) to the insane (50%), but no hard facts. The aim of this thread is to agree on a reservation recommendation for unRAID users for each supported filesystem type (if different). Ideally this should be a formal statement, but failing hard facts a consensus of personal recommendations would be best. Thoughts?
  12. garycase, gubbgnutten and RobJ, thanks for the replies, and I have to say I agree with what you are saying. This is one reason why I quoted and queried the original recommendation: to see if we could move this to a firmer, non-"personal" recommendation. A 10% reservation, if adhered to, could represent two complete drives' worth of space on a fully populated unRAID server, which seems quite high. Equally, I haven't seen any slowdown with XFS at high fill levels, but as with most here, my data on these drives is relatively static. However, in among these assumptions are real use cases where a larger reservation would make a real-world difference, and I would like to nail that down to the point where a couple of paragraphs in the manual could explain it, or even better the GUI could feed back to the user. Should we fork this thread, and do we have enough interest to resolve it?
  13. I believe this is a new recommendation? Can you expand on this in the context of RFS and XFS? Happy to start a new thread if needed as it sounds like quite an important consideration.
  14. Lovely to see the CVE fixes and the documentation of such. Clear and precise, much appreciated, and nice work all. Question: is the call home still in the RC?
  15. Another angle: 33W is a low enough number that the money required to significantly better it would probably vastly outweigh the electricity costs of running as is ... at least in this decade.
  16. Only the partition is exported. You can extract the disk device using the following code: DISK=$(echo $DEVICE | grep -Po "/dev/sd[a-z]") Appreciated as always.
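Folding that answer back into the umount script from the question below might look like the sketch here. It assumes Unassigned Devices exports the partition device in `$DEVICE` (e.g. `/dev/sds1`), uses bash parameter expansion as an alternative to the grep one-liner, and only covers `sdX`-style names; the output path is made up for illustration.

```shell
#!/bin/bash
# Hypothetical Unassigned Devices umount-script fragment.
DEVICE=${DEVICE:-/dev/sds1}   # fallback only so the sketch runs standalone

# Strip the trailing partition number to get the whole-disk device;
# shell-only alternative to the quoted grep one-liner (sdX names only).
DISK=${DEVICE%%[0-9]*}        # /dev/sds1 -> /dev/sds
echo "partition: $DEVICE  disk: $DISK"

# SMART data has to be read from the disk device, not the partition, e.g.:
#   smartctl -a "$DISK" > "/boot/smart-$(basename "$DISK").txt"
```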
  17. Having a brain fart. I have a umount script that Unassigned Devices calls, which does a bunch of things like indexing etc, and it's great. I totally rely on it for my workflow, so a big thanks. However, I just went to add a simple `smartctl -a /dev/sds > blah.txt` to it, but it seems the partition device is set as a variable, i.e. /dev/sds1, while the disk device, e.g. /dev/sds, is not. Before I go away and pull together a kludge, am I missing something obvious?
  18. Official reply from another thread
  19. Brilliant thanks, I completely failed to locate this. Seems I have been shipped a V1 drive even though the advert was for a V2. Given I will be using this for shelf archive storage I am tempted to just keep it. Don't expect I will be the last in this predicament.
  20. I finally have one of these. My google-fu is failing: what is the difference between a V2 and a V1 of this drive? The adverts all put V2 in the description (Seagate Archive V2 8TB 128MB Cache Hard Drive SATA 6Gb/s 5900RPM - OEM) but I can see no information anywhere on what a V1 drive was or how to identify it.
  21. There is a bridge here between reality and probability. The concept is that because you are not auditing the code you are using, you are trusting other people 100% to find, disclose and fix the issues. In general that works really quite well (thank god) IF IF IF you can apply patches. The extra kicker is that the more systems you add, the more risk you are taking on, and on an unRAID system you are betting the house that your VM or docker container contains no exploits that can be abused, and if they do, that the underlying architecture can keep them contained. Imagine what happens if someone breaks out of your VM. unRAID pays no special attention to keeping exploits contained. It inherits some security from upstream projects like docker etc, but as a general rule it uses a less secure model than pretty much any modern linux distro. This is not a negative point; it is by design, for convenience and its history of being a LAN-only device. Now in this context think of all the security patches that are released that you cannot apply because the appliance model does not let you. Last year a means of getting root was discovered and not made public; I am not even sure it is mentioned in the changelogs. unRAID is just the wrong device for this if you are concerned about security. Saying all that, as with all things, it's just a game of risk.
  22. This topic has been moved to Security. [iurl]http://lime-technology.com/forum/index.php?topic=49433.0[/iurl]
  23. General rule of security thumb: "Don't put anything on the internet you cannot patch". To add some context, the current stable unRAID uses docker 1.7.1; have a look at this changelog https://github.com/docker/docker/blob/master/CHANGELOG.md and see the highlights of what has been released in the last 11 months of missed patches. unRAID is simply not designed to be internet facing. Can you do it? Sure. Will you be safe? Maybe. I recommend a VPS for web development.
  24. Yeah, I thought I was on to something here, but it still doesn't matter since there is no magic delete+trim single command that bypasses the limitation we are seeing. So that is us out of options. We buy SSDs that have built-in garbage collection (most these days) and check parity more often. If anyone finds a gotcha we can post in this thread, confirm the issue and then push upstream to LT, but short of that we are at the limit of what we can do beyond anecdotal testing. Not all doom and gloom, because it fundamentally works; it's just that no one likes using unsupported features.