Danny N

Members
  • Posts

    14
  • Joined

  • Last visited

Posts posted by Danny N

  1. On 3/24/2024 at 11:19 AM, Jehoshua said:

    Hello

    Basically, it's good that Unraid is thinking about financing. 

    But it is extremely disappointing that Unraid still does not differentiate between commercial and private licenses.

    And now it gets even worse: Unraid only allows rich users or commercial users to buy a lifetime license.

     

    I know many families for whom 30$ is a lot of money - even in 'rich' Germany!

    That's why I have a private NAS with 12 disks that I make available to friends and neighbors free of charge.
    Professionally, I am responsible for IT in a company with around 180 employees, so I can judge the business and private benefits very well 🙂
     

    Therefore, such a pricing model would be much fairer:

    1. Starter, up to 6 attached storage devices
      • Private use:
        Idea: Private users always have the Unraid logo
        Buy: 39$
        Annual fee: 19$
        Lifetime: 99$ 
         
      • Commercial use:
        Idea: Commercial users can change the Unraid logo
        Buy: 69$
        Annual fee: 49$
        Lifetime: 219$ 
         
    2. Unleashed - unlimited number of devices
      • Private use:
        Idea: Private users always have the Unraid logo
        Buy: 79$
        Annual fee: 19$
        Lifetime: 149$ 
         
      • Commercial use:
        Idea: Commercial users can change the Unraid logo
        Buy: 159$
        Annual fee: 49$
        Lifetime: 299$ 

     

    An important note:

    • In 50 years I have met a lot of companies and 99% are fair when it comes to licensing.
    • Therefore, I would trust the self-declaration of the users: if someone orders a private license, you can assume the license is not being abused.
    • However, it would be important to be able to convert the license from private use to commercial use. There are many people who start out in a garage with a small private project that develops into a business over time - and then it should be easy to change the license 🙂

     

    Thanks a lot, kind regards,
    Thomas Schittli

    Ngl I would agree with the concept of a lifetime Starter, but I do think the lifetime prices probably need to be a little higher. I also feel there needs to be more of a gradient: someone with, say, 4 disks is probably going to be a lot more price sensitive than someone with 16, and I feel the older licence system handled that well. Now it's basically one price (which I get is to not penalise those who buy a higher tier than they need), but it means the smaller users are charged way more than they need to be, which really affects their cost-benefit analysis. If I know the cost of entry is £100 I'm gonna want to make sure it's for me, whereas at £10 I'll probably just try it. Right now these Starter users have a high cost of entry with a high cost of renewing, and I feel a lot will just go to TrueNAS or an off-the-shelf NAS. With the current pricing I don't feel I would ever have started, even though I now know it to be one of the best buys I've made. I almost wonder if they should just go with the annual cost, as updates are kinda important for a NAS and the buy cost just artificially increases the cost of entry. Maybe do:

    starter

       2 drives

       $20 buy / a year 

       $100 lifetime

        This would almost be a long-term trial and would let people use Unraid as a UI for a Docker container or VM or something (I've done this for Plex to get Intel Quick Sync hardware transcoding, as my main NAS is AMD based)

    Basic

        6 drives 

        $30 buy / a year

        $150 lifetime 

    plus 

        12 drives 

        $45 buy / a year (or maybe do $5 monthly but contracted for a year)

        $225 lifetime

    unlimited:

        Unlimited drives

        $60 buy / a year ($7 monthly but contracted for a year)

        $300 lifetime

    for commercial use, add $10 a year or $1 monthly, and allow changing logos and removing/changing references to Unraid other than in technical documentation/areas

    I'd also say for people with multiple licences there should be a discount of 50% per additional licence (of course, if you have Unlimited and Plus, the cheaper tier gets the discount)

     

    overall I feel this pricing, while more complex, would allow a low cost of entry while maximising the money earned from those who see the most value in the product, as those people will have the most disks. I'd also say Unraid could add secure remote access for, say, $10-20 a month, similar to how Home Assistant works: of course you can set it up yourself, but that's hard to make secure, so many will just buy it
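    To make the multi-licence discount idea above concrete, here's a quick sketch of how it could be computed (the tier names and annual prices are the hypothetical numbers from my proposal, not Unraid's actual pricing):

```python
# Hypothetical annual tier prices from the proposal above -- not real Unraid pricing.
ANNUAL_PRICES = {"starter": 20, "basic": 30, "plus": 45, "unlimited": 60}

def annual_cost(tiers, extra_discount=0.5):
    """Total yearly cost for several licences on one account.

    The most expensive licence is charged in full; every additional
    (cheaper) licence gets the proposed 50% multi-licence discount.
    """
    prices = sorted((ANNUAL_PRICES[t] for t in tiers), reverse=True)
    if not prices:
        return 0.0
    return prices[0] + sum(p * (1 - extra_discount) for p in prices[1:])

# e.g. an Unlimited main server plus a Plus backup server:
print(annual_cost(["unlimited", "plus"]))  # 60 + 45 * 0.5 = 82.5
```

    So a 3-server 3-2-1 setup stays affordable without giving the big multi-server users a free ride.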

  2. On 3/24/2024 at 9:56 AM, Helmonder said:


    That is a true word….

    However, Unraid has not been “free” for some time. Alternatives like TrueNAS or Proxmox are there and are free. So there is absolutely a place for users who want a free option.

    Unraid already is not that place.


    Sent from my iPhone with Tapatalk

    I'll just say here that Unraid is built for the home user: you are the customer. For TrueNAS/Proxmox, the home version is really just the trial for the enterprise version. It lets people refine their skills in the software so that, when a project comes up at work, someone will say "I know this software already, let's use that", and it also increases the pool of people who can easily be recruited to manage these systems, since people trained themselves for free at home. TrueNAS and Proxmox will not prioritise features home users want, while Unraid will. After all, there's a really good saying that if you're not the customer, you're the product, and while it's certainly not as bad as online ads stealing all your data, you're still the product: you're being used to give TrueNAS/Proxmox free training and maybe bug testing, as they probably do release less-tested builds to the free users so issues don't affect their real customers.

    • Like 1
  3. On 3/22/2024 at 12:16 PM, itimpi said:

    The one thing that surprised me was that the one year extension cost was the same regardless of whether you had the Starter or Unleashed licences since they are very different first year prices.   I would have thought something more like $29 and $39 might have been considered.   Probably a good reason why not but I wondered what it was?

    Ngl I thought the same, but it occurred to me that this would stop people with 5 disks from buying Unleashed, as the cost per year would go up even if they aren't using the only feature they paid extra for. To me it makes a lot of sense, even if it really hammers people who only have 1 or 2 drives. I get why, but I would worry this makes those really small use cases quite hard to justify now. I would love to see a Starter lifetime option, as I've honestly used Unraid in places just to add a remote UI to some Docker container or VM and so had minimal need for drives; under the new licensing that's a totally unworkable use case, as the cost per year is just too high for the benefit and the lifetime price is crazy money for it. I really feel Limetech missed out by not having some substantial discount for renewing keys on the same account.

    • Like 1
  4. 20 hours ago, isvein said:

    Agree, it's confusing how it is written now.

    Also, it's great to have options.
    It's a good thing that we have both the Unraid array and the btrfs, xfs and zfs options.
    And without the array, Unraid would not be Unraid, just another TrueNAS "clone" that would sit between Core and Scale.

    As I try to tell all the TrueNAS "sellers" out there: TrueNAS is great, but I and many others don't like or want to be locked to zfs.
    And surprise surprise, the hard-hitting TrueNAS "sellers" won't mention up front that it's locked to zfs, that you basically need identical drives, that you can't (for now) expand a pool without adding the same number of drives as the existing vdev(s), and that you can't use mixed-size drives (unless you make multiple pools).

    And if you try to point this out, most of the time they just go "Buuut muuuh freee!!"

    (Also, TrueNAS is not truly free; they too have developers they need to pay, just like Lime. But TrueNAS is (as far as I know) a hard hitter in the enterprise market and they also sell their own hardware solutions, which is where they get the money from. If Unraid had been a hard hitter in the enterprise market and Lime had sold their own hardware solutions etc., I'm pretty sure Unraid would have been free for private use too.)

    Another huge thing against TrueNAS is that a few of the more home-oriented features are not the greatest. I've been unable to get disk spin-down to work for just one drive (which is dumb, as you need the array to be active to get data, so one drive constantly spinning up and down is pointless; this is exactly why I'm gonna move this over to Unraid when full zfs support is added in 6.13).

     

    For those who might try to help: firstly, thank you, but I've looked into it a lot and have had the Unraid licence for this since last year, so there's no cost now. SMART was disabled, I've rebuilt and reinstalled a few times now, and it's always the first drive added to the pool. The behaviour is also totally dumb: the drive spins up, then immediately after spin-up shuts down, then spins up a couple of seconds later and spins down again. The spin-down delay is set to 6, so it shouldn't be doing that at all; it'll do this for 10 minutes and then stop. I only use the array for in-progress projects and the drives aren't NAS drives (they're my backup drives from pre-Unraid), so I'm not too fussed if one dies; I'll probably just rebuild the array with one less drive tbh.

  5. 10 hours ago, MrCrispy said:

     

    btrfs does everything ZFS does, mostly, in a much more friendly and less resource-intensive way, with added features, and it's more modern. I see no reason to adopt ZFS except it being a bigger name and more enterprisy.

     

    About your last point: it doesn't matter if ZFS can expand; btrfs does this already. Both of them stripe data. With Unraid I know that I can simply take any drive and it will have all its files in native format, readable outside the array.

    Ngl, if btrfs did what it said it would do reliably, zfs would never have been made, or even if it was, it wouldn't have gained much market share. It's only because btrfs was (and still is) not amazingly reliable (a must for a file system) in anything but its most simple modes (stripe and mirror), and because there was market demand for something btrfs-like, that zfs was developed and took over the enterprise section of what btrfs promised. Sadly, many of the btrfs features that appealed to home NAS users haven't made it into zfs yet (e.g. unequal drive sizes).

     

    At this point I think zfs has gained so much traction that even if btrfs became exactly what was promised, with stability the same as zfs, it wouldn't see much adoption outside home use, and so the project is doomed to fade into obscurity. Hopefully one day another fs will take the promise of btrfs and combine it with things enterprise needs (for example, much better SSD management) and that will gain wider adoption, but without enterprise on board there's much less funding and dev work committed.

  6. 21 hours ago, Spec7re said:

    I think there was a vote on this forum from Lime Tech (a while ago) asking the community what major feature they wanted them to work on next, and I believe ZFS won, hence them implementing it.

     

    Personally I think it's overkill for home users, but it does open the door for Unraid to be used in other situations. Unraid's default array is easy to expand (good for home users), but it's not the fastest. ZFS has a ton of features, but I think for a lot of people it's more about the increased performance than anything else. I've read quite a few posts across various forums, Reddit threads, etc. from individuals who love Unraid but wanted to use it for something like video editing and couldn't, due to the limited performance of Unraid. ZFS now opens the door for this group of people, and others with similar use cases, to use Unraid.

     

    Either way, Lime Tech isn't removing the default Unraid array; it will still be available for those that want it, but ZFS gives users another option to choose from if it's something they need. Really it's about opening the doors for Unraid to be used in more situations, other than just your typical media server.

    Ironically, ZFS has been working on giving ZFS users the ability to expand the pool/vdev one drive at a time (similar to Unraid), and it's due to come out later this year from what I remember. So in a way, you can say that Unraid has inspired home users who prefer ZFS to push and ask for this ability in ZFS as well... which is being done. So I guess the next big question is: once ZFS has that ability, does it make Unraid's default array obsolete in a way, especially when you consider the increased performance, etc.? I don't know the answer to that, but I am sure this question will come up at some point.

    So as far as my understanding goes, there are a couple of key differences between Unraid's array expansion method and zfs's:

    1. Storage efficiency. Let's say we have 4 identical drives in a raidz1, or in Unraid with 1 parity: 3 drives are data and 1 is parity, so 75% of your space is usable. Adding one more data drive, you would think this increases to 80% (4 data and 1 parity), but on zfs you're locked into the 3-bits-of-data-for-every-1-bit-of-parity ratio, so it remains at 75% efficiency unless you completely rebuild.

    2. Drive failure. On zfs, if you lose more drives than you have redundancy for, you lose all the data; on Unraid you only lose the data on the failed drives, as each drive is its own file system.

    One last thing I would like to mention is that zfs's bitrot protection only works in a raidz or mirrored array and not on a single disk, which, while I get why this is the case, is really dumb, as there are going to be a lot of people using zfs for a protection they don't actually have. Sadly, zfs is an enterprise-first fs, so home use is a much lower priority.
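    To put numbers on the storage efficiency point, here's a quick sketch of the arithmetic (my own model of how raidz expansion keeps the original parity ratio; the function names are just for illustration):

```python
def unraid_usable(n_drives, n_parity=1):
    """Usable drives in an Unraid array: every non-parity drive holds data."""
    return n_drives - n_parity

def raidz_expanded_usable(n_drives, original_width, n_parity=1):
    """Usable drives in a raidz vdev grown one disk at a time.

    My understanding: existing data keeps the original data:parity ratio,
    so efficiency stays at (width - parity) / width until a full rebuild.
    """
    return n_drives * (original_width - n_parity) / original_width

# 4 identical drives, single parity: both layouts give 3 usable drives (75%).
# After adding a 5th drive:
print(unraid_usable(5))                            # 4 usable drives (80%)
print(raidz_expanded_usable(5, original_width=4))  # 3.75 usable drives (still 75%)
```

    So the quarter of a drive you "lose" on the expanded raidz only comes back if you destroy and rebuild the vdev.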

  7. Overall I think this change is a good one. As a user, subscriptions are annoying (especially as they often come with so much extra DRM that messes up when the internet dies, or just when it feels like it), but at the end of the day updates do cost money to make, and if there isn't some ongoing-cost model then we fall into the trap of Plex, where they are constantly adding the latest newsworthy feature while the core product and other housekeeping items get neglected. That being said, I feel a few things should be a little different:
    1. There should be a discount for those with Unraid on multiple machines. People who follow the 3-2-1 backup rule with 3 servers of more than 4 drives each (in the form of older, smaller drives) are basically having to buy 3 renewals. I feel it would be very fair to say that for these users, upgrades cost the renewal price of the most expensive tier (so Starter or Unleashed), with the other servers being something like 10-15% of the normal cost (especially as it's been said the renewal pricing per year is half the new licence cost). Maybe this should be tied to storage space rather than drive count, so it's fair to people reusing older hardware and saving it from the landfill, as it looks like there will be a huge difference between Starter and Unleashed, including in renewal pricing.

    2. I'd also like to see a perpetual licence tier for Starter, as Unraid has uses outside of file storage. (I actually ended up splitting Plex off to its own mini PC and used Unraid for that, as a separate N100 at idle used less power than the GTX 1650 at idle in my main server. Suffice to say I wouldn't update this server under the new system and would probably not have bothered with Unraid for it at all, so that would be a lost sale; I just knew exactly how Plex worked with Unraid and didn't bother doing it differently.)
    3. Security updates/bugfixes should be free: going from 6.12.0 to 6.12.8 should be free even for those whose updates have expired. I get it costs money, but these are small bugfix updates that fix mistakes in the code, like what happened in the early 6.12 releases (6.12.0 to 6.12.4, I think it was) with some networking issues; this shouldn't require paying, as it was a mistake in the network implementation. Charging for them would also just increase the number of people needing support for known and fixed issues, which will take time on tickets etc. Overall this makes me wonder if Unraid should move to a system where every feature release is charged for (so 6.12, 6.13, 6.14, 7.0 etc.) rather than a fixed timeframe, as this would eliminate people being a little mad about paying for a year's worth of updates when delays mean they don't see any meaningful updates in that year (to them), or just missing something because it comes out a week after their term expires. There are a lot of ways this could be done, but overall I feel subscriptions are an overdone thing in software and most companies abuse the extra income subscriptions can offer. Based on Unraid's track record I don't feel that's the case here, but that's what a lot of people are going to see, and I feel clear expectations should be set as to what you're getting (maybe a guaranteed 2x 6.x updates per term or something, and if not, payment is paused until they come out?). The best way to do that is to pay for the feature release after it's made, as then no user can complain they didn't know what they were getting.

    Overall I feel this is one of those changes that kinda sucks for the end user, but I can understand it, and I really want to recognise that no bait-and-switch has been pulled: all older users have been grandfathered in to forever updates, and they haven't changed the name or done something like calling them "upgrades" not "updates" and saying the old licence doesn't cover upgrades so pay up.
    Also, sorry for waffling a little, but I had a lot of ideas (that I'm sure have been considered) and didn't really know how else to put them into words.

    • Upvote 1
  8. Just now, Danny N said:

    With zfs support now added, the thing I think is missing is the ability to partition drives and use these partitions in the array/pools. This may seem odd, but the reason is that I saw on Reddit that on FreeNAS it's possible to get checksumming and correction working on a single drive by partitioning the drive and then raidz1-ing the partitions, giving a single drive bitrot protection without the huge capacity loss of saving everything twice (copies=2). Of course it will have a huge performance cost, but for archival data it might be worth it. Is there a way to do something similar on Unraid, and then add this single-drive 'pool' to the Unraid array so it can be protected by the Unraid parity system?
    Again, I know this is very weird, but I kinda got Unraid for the easy expansion, the lack of all drives spinning up, and, in a way, the protection of drives failing beyond the parity without losing all the data. Combining this with the checksumming/bitrot protections of zfs is an interesting idea (since I have 16TB HDDs, I can partition these 8 ways, so a raidz1 would give me 14TB of usable space), and ARC/L2ARC should help with the read speeds of commonly used files anyway, so maybe the performance won't be too horrible, especially with all writes going to the cache drives. At this time I'm more interested in whether or not it's possible in Unraid at all, as I think it'd be pretty stupid to attempt this kind of thing with live data until the full zfs implementation on Unraid is complete.
     

    TL;DR: I think this boils down to 2 things: 1. can you partition drives in Unraid (1b. and then pool these partitions), and 2. can you add zfs pools into the Unraid array as a 'drive'? (This last one could be useful for others if they have a zfs drive and want to add a SLOG / L2ARC / other special vdev to a parity-protected array drive.)
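    For anyone checking my maths on the 14TB figure, here's the capacity arithmetic for the raidz1-over-partitions trick (just a sketch of the numbers; it says nothing about whether Unraid can actually do this):

```python
def single_drive_raidz_usable(drive_tb, n_partitions, n_parity=1):
    """Usable space when one drive is split into equal partitions and
    those partitions form a raidz vdev: parity costs one partition's
    worth of space per parity level."""
    return drive_tb * (n_partitions - n_parity) / n_partitions

# A 16TB drive split 8 ways with raidz1 across the partitions:
print(single_drive_raidz_usable(16, 8))  # 14.0 TB usable
# versus copies=2 on the same drive, which leaves only 8TB.
```

    The more partitions you use, the less space parity eats, at the cost of the drive head thrashing harder on every write.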

    The second thing Unraid is missing for me is backups. I use rsync to back up my Unraid server, but being able to do this via the GUI would be so much nicer.

    Actually, I suppose a better idea would be a tool that can mark a file as corrupted (e.g. when zfs or a plugin sees a checksum mismatch) and regenerate the file using the parity data, though I'm assuming this isn't possible without making your own filesystem.

  9. With zfs support now added, the thing I think is missing is the ability to partition drives and use these partitions in the array/pools. This may seem odd, but the reason is that I saw on Reddit that on FreeNAS it's possible to get checksumming and correction working on a single drive by partitioning the drive and then raidz1-ing the partitions, giving a single drive bitrot protection without the huge capacity loss of saving everything twice (copies=2). Of course it will have a huge performance cost, but for archival data it might be worth it. Is there a way to do something similar on Unraid, and then add this single-drive 'pool' to the Unraid array so it can be protected by the Unraid parity system?
    Again, I know this is very weird, but I kinda got Unraid for the easy expansion, the lack of all drives spinning up, and, in a way, the protection of drives failing beyond the parity without losing all the data. Combining this with the checksumming/bitrot protections of zfs is an interesting idea (since I have 16TB HDDs, I can partition these 8 ways, so a raidz1 would give me 14TB of usable space), and ARC/L2ARC should help with the read speeds of commonly used files anyway, so maybe the performance won't be too horrible, especially with all writes going to the cache drives. At this time I'm more interested in whether or not it's possible in Unraid at all, as I think it'd be pretty stupid to attempt this kind of thing with live data until the full zfs implementation on Unraid is complete.
     

    TL;DR: I think this boils down to 2 things: 1. can you partition drives in Unraid (1b. and then pool these partitions), and 2. can you add zfs pools into the Unraid array as a 'drive'? (This last one could be useful for others if they have a zfs drive and want to add a SLOG / L2ARC / other special vdev to a parity-protected array drive.)

    The second thing Unraid is missing for me is backups. I use rsync to back up my Unraid server, but being able to do this via the GUI would be so much nicer.

  10. On 3/30/2021 at 6:06 PM, Danny N said:

    This is only about a minute after bootup; hopefully it helps. For now I'm gonna try dropping the PCIe link speed down to gen 2 to see if it's the ribbon cables (the shielded ones) for the HBA cards.

    dnas-diagnostics-20210330-1804.zip

    EDIT: Adding syslog

    dnas-syslog-20210330-1711.zip

    OK, it seems to be the PCIe link speed. After dropping to gen 2 I've now had a successful parity rebuild on 2 drives and then a full parity check without error. This is the first time it's completed a parity check successfully, and it hadn't finished 2 operations back to back before either, so I'm gonna say this has nothing to do with my issue.
    EDIT: thanks for the help :)

  11. 23 hours ago, trurl said:

    Go to Tools  - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread 

    This is only about a minute after bootup; hopefully it helps. For now I'm gonna try dropping the PCIe link speed down to gen 2 to see if it's the ribbon cables (the shielded ones) for the HBA cards.

    dnas-diagnostics-20210330-1804.zip

    EDIT: Adding syslog

    dnas-syslog-20210330-1711.zip

  12. Hi, so probably not related to this issue, but last week I made major changes to my system in order to add an SSD cache and a GPU for a VM. For this I added 2 LSI 9207 cards via an ASUS Hyper M.2 expander (due to only having 2 PCIe slots with 4 lanes or more, one of which was already in use). Anyway, it's a little complex, but it seems to work(ish); however, I'm having problems with the LSI cards dropping out and then crashing. This is using 6x ST16000NM001G with 4 data drives and dual parity, and the issue has so far caused 4 disks to become disabled: data 2 and 3 were disabled on the first parity check after the changes, at around 5% completion. The second attempt worked and data 2 and 3 rebuilt successfully; at this point I made a full backup and ran another parity check to confirm it was running OK, and it crashed at approx 10% with parity 1 disabled. I then swapped the controllers around, so controller 2 had the HDDs connected and controller 1 had the SSDs, and ran a parity rebuild for the parity 1 disk; at around 4% it crashed with data 3 disabled again. Unfortunately I'd only enabled logging on the second crash and didn't realise the logs were only saved in RAM, so I have no logs.

    I'm aware that the ST16000NM001G is not an IronWolf, but I've read that these are very similar to their 16TB IronWolf drives, so they may be affected. I originally thought this was due to bent pins on the CPU, which happened during this rebuild when I dropped the CPU (it attached itself to the underside of the cooler) and crushed it against the case while trying to catch it. This completely flattened 8 pins, but according to the diagram on WikiChip these are for memory channel A and GND (pin 2 from the corner broke, but this is only power). The CPU ran happily during a stress test and is currently 2 hours through a memtest with 0 errors, so if this isn't the issue, I can only assume it's the signal integrity between the CPU and the 9207s, which I'll test by dropping the link speed down to gen 2 and hoping this doesn't affect my 10Gb NIC.

    Full system spec before:
    DATA: ST16000NM001G x6
    Cache: none
    VM data: Samsung 860 1TB via Unassigned Devices
    Docker data: SanDisk 3D Ultra 960GB via Unassigned Devices
    These were connected via the mobo ports and via a cheap SATA card I had lying around in PCIEX1_1
    GPU: 1660 Super for Plex (in PCIEX16_1)
    CPU: 3950X

    Mobo: ASUS B550-M
    RAM: 64GB Corsair Vengeance (non-ECC) @ 3600MHz
    PSU: Corsair 850W RMx

    Case: Fractal Design Node 804
    with APC UPS 700VA

    Damaged pin details:
    According to WikiChip (see pic 1600px-OPGA-1331_pinmap.svg.png):

    The damaged pins were C39 - K39 (C39 - K38 fully flattened), and AP1 to AU1 were slightly bent. After the repair, B39 fell off, as it was not only flattened but had actually folded in half :( and A39, C39, E39 and J39 still had a thin section at the top of the pin, right where it was bent. The system booted and passed the CPU stress test etc. (I didn't consider doing a memtest at this time.)


    Full system spec after:
    DATA: ST16000NM001G x6
    Cache: 2x MX500 2TB
    VM data: 2x Samsung 860 1TB via pools
    Docker data: SanDisk 3D Ultra 960GB and Samsung 860 1TB via pools

    These are connected via 2x LSI 9207 in PCIEX16_1 via Hyper M.2 slots 2 and 3, with the HDDs on one card and the SSDs on the other card

    NIC: ASUS XG-C100C (in PCIEX16_1 via Hyper M.2 slot 4)
    GPU: 1660 Super for Plex (in PCIEX16_1 via Hyper M.2 slot 1)
    GPU2: RX570 (intended for a Win 10 VM, currently unused in PCIEX16_2)
    CPU: 3950X (now with bent and missing pins)
    RAM: 64GB Corsair Vengeance (non-ECC) @ 3600MHz

    Mobo: ASUS B550-M
    PSU: Corsair 850W RMx
    with APC UPS 700VA
    Case: Fractal Design Node 804 (yeah, it's a very tight build)

     

    I'll update if I find the issue (or get logs of it, now I have those set up), but there's a slim chance it's related (still got at least 22 hours of memtest to go tho).

    Sorry for the long comment, but more detail hopefully helps.