Danny N

Members · 8 posts

Everything posted by Danny N

  1. Overall I think this change is a good one. As a user, subscriptions are annoying (especially as they often come with so much extra DRM that breaks when the internet dies, or just because it feels like it), but at the end of the day updates do cost money to make, and if there isn't some ongoing cost model we fall into the Plex trap, where the latest newsworthy feature is constantly added while the core product and other housekeeping items get neglected. That said, I feel a few things should be a little different:

     1. There should be a discount for those with Unraid on multiple machines. People following the 3-2-1 backup rule with three servers of more than 4 drives each (in the form of older, smaller drives) are basically having to buy three renewals. I feel it would be very fair for these users to pay the renewal cost of the most expensive tier (so Starter or Unleashed) once, with the other servers at something like 10-15% of the normal cost (especially as it's been said the renewal pricing per year is half the new-licence cost). Maybe this should be tied to storage space rather than drive count, so it's fair to people reusing older hardware and saving it from the landfill, as it looks like there will be a huge difference between Starter and Unleashed renewal pricing.

     2. I'd also like to see a perpetual licence tier for Starter, as Unraid has uses outside of file storage. (I actually ended up splitting Plex onto its own mini PC and used Unraid for that, since a separate N100 at idle used less power than the GTX 1650 at idle in my main server. Suffice to say, I wouldn't update this server under the new system and would probably not have bothered with Unraid for it at all, so that would be a lost sale; I just knew exactly how Plex worked with Unraid and didn't bother doing it differently.)

     3. Security updates and bug fixes should be free: going from 6.12.0 to 6.12.8 should be free even for those whose updates have expired. I get that it costs money, but these are small point releases that fix mistakes in the code, like the networking issues in the early 6.12 releases (6.12.0 to 6.12.4, I think). A mistake in the network implementation shouldn't require paying to fix, and charging for it would also just increase the number of people needing support for known, fixed issues, which takes time on tickets etc.

     Overall this makes me wonder whether Unraid should move to charging per update (so 6.12, 6.13, 6.14, 7.0, etc.) rather than a fixed timeframe. That would eliminate people being a little mad about paying for a year's worth of updates while delays mean they see no meaningful updates in that year (to them), or just missing something because it comes out a week after their term expires. There are a lot of ways this could be done, but overall I feel subscriptions are an overdone thing in software, and most companies abuse the extra income subscriptions can offer. Based on Unraid's track record I don't feel that's the case here, but that's what a lot of people are going to see, and I feel clear expectations should be set about what you're getting (maybe a guaranteed 2x 6.x updates per term, with payment paused until they come out?). The best way to do that is to pay for each feature release after it's made, as then no user can complain they didn't know what they were getting.

     Overall I feel this is one of those changes that kind of sucks for the end user, but I can understand it, and I really want to recognise that no bait-and-switch has been pulled: all older users are grandfathered into forever updates, and nothing was renamed (like calling them "upgrades" rather than "updates" and saying the old licence doesn't cover upgrades, so pay up). Also, sorry for waffling a little; lots of ideas (which I'm sure have already been considered), but I didn't really know how to put them into words.
  2. Actually, I suppose a better idea would be a tool that can mark a file as corrupted (e.g. when ZFS or a plugin sees a checksum mismatch) and regenerate the file using the parity data, though I'm assuming this isn't possible without making your own filesystem. (The detection half at least exists today; see the sketch below.)
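     For the detection half, ZFS can already list exactly which files failed their checksums; it's only the "repair from Unraid parity" part that doesn't exist. A minimal sketch, assuming a pool named tank:

     ```
     # Re-read every block in the pool and verify it against its checksum
     zpool scrub tank

     # Once the scrub finishes, -v names any files with checksum errors; on a
     # redundant vdev ZFS repairs them itself, on a single disk it can only
     # report them
     zpool status -v tank
     ```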
  3. With ZFS support now added, the thing I think this is missing is the ability to partition drives and use those partitions in the array/pools. This may seem odd, but I saw on Reddit that on FreeNAS it's possible to get checksumming and correction working on a single drive by partitioning the drive and then running RAIDZ1 across the partitions, giving a single drive bitrot protection without the huge capacity loss of saving everything twice (copies=2). Of course there's a big performance loss, but for archival data it might be worth it.

     Is there a way to do something similar on Unraid, and then add this single-drive "pool" to the Unraid array so it's also protected by the Unraid parity system? I know this is very weird, but I got Unraid partly for the easy expansion, the lack of all drives spinning up, and the fact that drives failing beyond parity don't lose all the data; combining that with the checksumming/bitrot protection of ZFS is an interesting idea. Since I have 16TB HDDs, I can partition each one 8 ways, so a RAIDZ1 would give me 14TB usable space, and ARC/L2ARC should help with read speeds of commonly used files anyway, so maybe the performance won't be too horrible, especially with all writes going to the cache drives. At this point I'm more interested in whether it's possible in Unraid at all, as I think it would be pretty stupid to attempt this kind of thing with live data until the full ZFS implementation on Unraid is complete. (A sketch of the partitioning side is below.)

     TLDR: I think this boils down to two things. 1. Can you partition drives in Unraid (1b. and then pool those partitions)? 2. Can you add ZFS pools into the Unraid array as a "drive"? (This last one could be useful for others too, if they have a ZFS drive and want to add a SLOG / L2ARC / other special vdev to a parity-protected array drive.)

     The second thing Unraid is missing for me is backups. I use rsync to back up my Unraid server, but being able to do this via the GUI would be so much nicer.
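     For what it's worth, the partitioning side of this on plain Linux looks roughly like the sketch below. Everything here is illustrative: /dev/sdX, the 1862GiB partition size, and the pool name coldstore are placeholders, and whether Unraid's array would then accept such a pool as a member is exactly my open question.

     ```
     # Carve one 16TB disk into 8 equal GPT partitions (sgdisk is from the
     # gptfdisk package; ~14902GiB total / 8 ≈ 1862GiB each)
     for i in $(seq 1 8); do
       sgdisk --new=0:0:+1862G /dev/sdX
     done

     # RAIDZ1 across the 8 partitions: one partition's worth of space goes to
     # parity, so ~14TB of the 16TB stays usable, and a bad block in any one
     # partition can be rebuilt from the other seven on the same spindle
     zpool create coldstore raidz1 \
       /dev/sdX1 /dev/sdX2 /dev/sdX3 /dev/sdX4 \
       /dev/sdX5 /dev/sdX6 /dev/sdX7 /dev/sdX8
     ```

     And on the backup point, what I mean by rsync is just the usual mirror form (the paths and host here are examples):

     ```
     rsync -avh --delete /mnt/user/share/ backupbox:/mnt/backups/share/
     ```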
  4. OK, it seems to be the PCIe link speed. After dropping to Gen 2 I've now had a successful parity rebuild on 2 drives and then a full parity check without error. This is the first time it's completed a parity check successfully, and it had never finished two operations back to back before either, so I'm going to say this has nothing to do with my issue. (A quick way to confirm the negotiated speed is below.) EDIT: thanks for the help
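     For anyone chasing a similar issue: you can confirm what a slot actually negotiated without rebooting into the BIOS. The bus address below is an example; get yours from the first command.

     ```
     # Find the HBA's PCI bus address
     lspci | grep -i lsi

     # LnkCap is what the card supports, LnkSta is what was actually
     # negotiated: 5GT/s is Gen 2, 8GT/s is Gen 3
     lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
     ```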
  5. This is only about a minute after bootup; hopefully it helps. For now I'm going to try dropping the PCIe link speed down to Gen 2 to see if it's the ribbon cables (the shielded ones) for the HBA cards. dnas-diagnostics-20210330-1804.zip EDIT: adding syslog dnas-syslog-20210330-1711.zip
  6. Ty, currently running a memtest for a day, so I'll do this tomorrow once I get back into Unraid.
  7. Edit to the above: I do have this log from after it finished the first rebuild and crashed, disabling the parity disk, though it probably doesn't help much. syslog
  8. Hi, so probably not related to this issue, but last week I made major changes to my system to add an SSD cache and a GPU for a VM. For this I added 2x LSI 9207 cards via an ASUS Hyper M.2 expander (due to only having 2 PCIe slots with 4 lanes or more, one of which was already in use). A little complex, but it seems to work(ish); however, the LSI cards keep either dropping out or crashing. This is with 6x ST16000NM001G as 4 data drives plus dual parity, and the issue has so far caused 4 disks to become disabled:

     On the first parity check after the changes, data 2 and 3 were disabled at around 5% completion. The second attempt worked, and data 2 and 3 rebuilt successfully. At this point I made a full backup and ran another parity check to confirm it was running OK; it crashed at approx 10% with parity 1 disabled. I then swapped the controllers around, so controller 2 had the HDDs and controller 1 had the SSDs, and ran a parity rebuild for the parity 1 disk; at around 4% it crashed with data 3 disabled again. Unfortunately I'd only enabled logging after the second crash and didn't realise the logs were only saved in RAM, so I have no logs (a way to avoid this is sketched at the end of this post).

     I'm aware the ST16000NM001G is not an IronWolf, but I've read these are very similar to the 16TB IronWolf drives, so maybe they're affected. I originally thought this was due to bent pins on the CPU, which happened during this rebuild: I dropped the CPU after it stuck to the underside of the cooler, then crushed it against the case while trying to catch it. This completely flattened 8 pins, but according to the diagram on WikiChip these are for memory channel A and GND (pin 2 from the corner broke, but this is only power). The CPU ran happily during a stress test and is currently 2 hours into a memtest with 0 errors, so if this isn't the issue then I can only assume it's the signal integrity between the CPU and the 9207s, which I'll test by dropping the link speed down to Gen 2 and hoping that doesn't affect my 10Gb NIC.

     Full system spec before:
     DATA: ST16000NM001G x6
     Cache: none
     VM data: Samsung 860 1TB via Unassigned Devices
     Docker data: SanDisk 3D Ultra 960GB via Unassigned Devices
     (these were connected via mobo ports and a cheap SATA card I had lying around in PCIEX1_1)
     GPU: 1660 Super for Plex (in PCIEX16_1)
     CPU: 3950X
     Mobo: ASUS B550-M
     RAM: 64GB Corsair Vengeance (non-ECC) @ 3600MHz
     PSU: Corsair 850W RMx
     Case: Fractal Design Node 804, with APC UPS 700VA

     Damaged pin details: according to WikiChip (link to pic), the damaged pins were C39-K39 (C39-K38 fully flattened), and AP1-AU1 were slightly bent. After repair, B39 fell off, as it was not only flattened but had actually folded in half, and A39, C39, E39 and J39 still had a thin section at the top of the pin, right where it was bent. The system booted and passed a CPU stress test etc. (I didn't consider doing a memtest at the time.)

     Full system spec after:
     DATA: ST16000NM001G x6
     Cache: 2x MX500 2TB
     VM data: 2x Samsung 860 1TB via pools
     Docker data: SanDisk 3D Ultra 960GB and Samsung 860 1TB via pools
     (these are via 2x LSI 9207 in PCIEX16_1 via Hyper M.2 slots 2 and 3, with the HDDs on one card and the SSDs on the other)
     NIC: ASUS XG-C100C (in PCIEX16_1 via Hyper M.2 slot 4)
     GPU: 1660 Super for Plex (in PCIEX16_1 via Hyper M.2 slot 1)
     GPU2: RX 570 (intended for a Win 10 VM, currently unused, in PCIEX16_2)
     CPU: 3950X (now with bent and missing pins)
     RAM: 64GB Corsair Vengeance (non-ECC) @ 3600MHz
     Mobo: ASUS B550-M
     PSU: Corsair 850W RMx, with APC UPS 700VA
     Case: Fractal Design Node 804 (yeah, it's a very tight build)

     I'll update if I find the issue (or get logs of it, now I have those set up), but there's a slim chance it's related (still got at least 22 hours of memtest to go, though). Sorry for the long comment, but more detail hopefully helps.
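     Side note on the lost logs, in case it saves someone else: /var/log on Unraid is a RAM disk, so everything there disappears on a hard crash, while /boot (the USB flash) survives. I believe newer releases also have a built-in syslog-mirroring option in the settings, which is the cleaner route, but a crude stopgap looks like the sketch below (paths are Unraid defaults; the interval is arbitrary, and it adds a little flash wear, so only run it while hunting a crash).

     ```
     # HBA dropouts usually show up in the kernel log first
     dmesg | grep -iE 'mpt[23]sas|sas2308'

     # Keep a live copy of syslog on the flash so it survives a hard crash
     mkdir -p /boot/logs
     while true; do
       cp /var/log/syslog /boot/logs/syslog-live.txt
       sleep 60
     done &
     ```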