
MAM59


Posts posted by MAM59

  1. You asked for ZFS, you got it.

    It's bad; at least UNRAID is not a really good host for it (but it's bad on other platforms too).

    ZFS is fast as long as there is enough free RAM for the ARC cache, but after that it crumbles down to drive speed / number of disks, because it has to wait for all drives to store data and parity.

    You can easily prove this by rebooting and continuing the transfer later on. I bet it will go up to 500 again for a while, then drop down again for the rest of the time.

     

    You could extend that period by editing the kernel variable that limits ZFS's RAM usage (search this forum for it; I can't remember the name, I dropped ZFS long ago after frustrating results). Stock UNRAID only gives ZFS 1/8 of the total RAM.

     

    But beware! Running ZFS without a proper UPS can result in total loss of data! The larger the ARC in RAM is, the more likely it is that it won't be synced to disk fast enough if an outage happens!!!

     

    Use at your own risk!
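
    If you do want to experiment anyway: on ZFS-on-Linux the limit is the zfs_arc_max module parameter. A minimal sketch of how to inspect and raise it from the console (the 16 GiB value is just an example, and the change does not survive a reboot):

    # show the current ARC size limit (c_max, in bytes)
    grep c_max /proc/spl/kstat/zfs/arcstats
    # raise the limit to e.g. 16 GiB for the running system
    echo $((16 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max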

     

     

  2. Well, that actually doesn't matter much.

     

    Whether you set a mount point yourself via Unassigned Devices or create a new pool and put the disk in there makes no difference.

     

    But creating the pool (and thereby setting the mount point right away) is the "cleaner" approach. Otherwise you will also have a lot of manual work later when creating and sharing the share.

    The only important thing is that this pool alone is set as Primary Storage for the DVR share, with no Secondary Storage. Then the Mover will never touch that stuff.

     

    • Like 1
  3. Keep cool...

    Currently there is a "naming problem".

    In UNRAID, "ARRAY" usually means a set of independent data disks plus one (or two) parity disks that ensure data integrity and allow restoration if one drive (or two) fails.

    This is NOT RAID5!!! (thankfully!)

     

    Since the newest release, the ZFS filesystem is also available, which allows arrays (hence the naming problem!) with several different RAID levels.

    Some have even tried to use a ZFS array within the UNRAID array (it works, but it is total nonsense and slow as a dog! AVOID AVOID AVOID!!!).

     

    To work, UNRAID currently needs one UNRAID array (and ONLY ONE!) plus any number of "pools". A ZFS array (now called RAIDZ) can form such a pool.

     

    The main advantage of the UNRAID array is that you do not need identical drives to build it. The only restriction is that the parity drive must always be the largest of all (you can only add larger data disks AFTER you have swapped the parity drive(s) to at least that size). And even if you break up the array someday, the single disks are still readable and contain all their data. The main disadvantage is that there is currently no read cache. You can add SSDs or NVMes as "cache", but they are only used for writing new files; once the "mover" has moved the data to the main array and freed the cache again, read and write speeds drop to drive speed (writes even slower, because parity needs to be kept in sync).

     

    ZFS comes with all the "nice to have" features of a modern filesystem, including snapshots, a read cache in RAM and so on. But it still uses strict RAID, which means you have to use the same drives (or at least the same size) for ALL disks, and there is no way to enlarge the array later on. And of course, if the array fails badly, all data is lost, including the files on the still-healthy drives.

     

    So this is more or less a decision you have to make yourself: flexibility and safety vs. speed and being "bound to the current state".

     

    You could set up an UNRAID array with your three 22TB drives (using one as parity) and a ZFS RAIDZ pool with the remaining 2TB drives (but really, I would look at the electricity bill, throw them away and add another new 22TB drive to the main array).
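
    Just to illustrate what such a RAIDZ pool is: in UNRAID you would simply create a new pool in the GUI and pick the raidz profile, but on plain ZFS the equivalent would look roughly like this sketch (pool name and device names are made-up examples):

    # build a raidz (RAID5-like) pool from three 2TB disks
    zpool create smallpool raidz /dev/sdx /dev/sdy /dev/sdz
    # verify layout, redundancy and health
    zpool status smallpool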

     

    The main point about UNRAID is that it is NOT RAID!!!

     

    • Upvote 1
  4. Actually, you are going about it the wrong way.

     

    Usually there is only ONE share, and the subfolders are addressed from the Windows side.

     

    An example: share = Homes (in your case "user"), redirection(s) via GPO on Windows: "Documents=\\SERVER\Homes\%USERNAME%" (and the same for Music, Pictures and so on...).

    That way the redirections land directly in the subfolders.

    In a domain you can configure this centrally; in a workgroup you have to do it on every box separately (but a REG file and a determined double-click will do the job too).

     

    (An unsolicited practical tip on top: KEEP IT SHORT! Both the server name and the share name count towards the maximum path length, and since users usually take no care at all with file names and folder structures, that maximum does get exceeded now and then. My server is called "F" and the share "H". Not pretty, but it raises the WAF enormously...)

    • Like 1
  5. 8 hours ago, LeroyLaF said:

    [...] and another M.2, and the network card are using the chipset.

    That's the culprit! (at least one of them)

    The chipset is only attached to the CPU with 4 lanes and multiplexes them to the M.2 slot and the LAN card. Both only get part of the bandwidth; if one is active, the other has to wait. (Some boards compensate for this with a PCIe 4.0 link from CPU to chipset while only offering 3.0 speed to the attached devices, but only the manufacturer knows whether that is the case or whether the chipset itself also runs at 3.0 speed only. They won't tell you.)

     

    8 hours ago, LeroyLaF said:

    You say >10G only works on certain boards with Xeon or Threadripper, but what makes those CPU's so special?

    These CPUs offer tons of lanes; all slots are connected to them directly and nobody has to wait. You get mobos with five or six x16 slots, fully wired. But of course, there is a major difference in price and power consumption.

    With "normal" hardware there is always a limit, sadly usually buried deep in the manuals behind (*) footnotes.

     

  6. With current (standard) PC hardware it's hard to get close to the possible 25G speed.

    There are many limiting factors.

    One of them is PCIe.

    Even with optimistic full use of PCIe 3.0, only about 24G is possible. And of course that is a theoretical value; subtract some 10% for real life.

    Cards with at least PCIe 4.0, or better 5.0, are needed to play it safe. And 6.0 is already on its way...

    But it is normal for network speed to run a couple of years ahead of the rest of the hardware.

     

    Speeds >10G currently only run really fast and smoothly on certain server boards with Xeon or Threadripper CPUs.

     

    You may spend some leisure time experimenting with settings and improving throughput a bit, but it is not worth the effort, and maybe the next driver kills your settings and you can start over from scratch.

     

    Also, it is quite common that the motherboards themselves are limited. Not every slot has all its lanes connected, and the BIOS has restrictions like "if you turn on SATA5-8, lanes 3&4 of slot X will be shut off". So read the f*@cking manuals carefully and watch which slot you put these cards into. Maybe they only run at half speed???
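
    A quick way to check what a card actually negotiated (just a sketch; the device address 01:00.0 is an example, look yours up with plain lspci first):

    # find the PCI address of the NIC
    lspci | grep -i ethernet
    # compare what the card supports (LnkCap) with what it actually got (LnkSta)
    lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkSta:'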

     

  7. It happened a while ago, can't remember when exactly, but it worked before.

     

    Drive sdc never spins down anymore, not even if I press the green button in the GUI manually. There are no files in use, and I cannot see even an attempt to spin down in the logs (and no hint as to why it does not happen, either).

     

    Usually you will find entries like this:

    Apr 27 07:09:57 F emhttpd: spinning up /dev/sdd
    Apr 27 07:09:57 F emhttpd: spinning up /dev/sdf
    Apr 27 07:09:57 F emhttpd: spinning up /dev/sdg
    Apr 27 07:10:10 F emhttpd: read SMART /dev/sdg
    Apr 27 07:10:10 F emhttpd: read SMART /dev/sdd
    Apr 27 07:10:10 F emhttpd: read SMART /dev/sde
    Apr 27 07:10:10 F emhttpd: read SMART /dev/sdf
    Apr 27 07:14:16 F webGUI: Successful login user root from 192.168.0.26
    Apr 27 07:14:53 F kernel: mdcmd (48): set md_num_stripes 1280
    Apr 27 07:14:53 F kernel: mdcmd (49): set md_queue_limit 80
    Apr 27 07:14:53 F kernel: mdcmd (50): set md_sync_limit 5
    Apr 27 07:14:53 F kernel: mdcmd (51): set md_write_method
    Apr 27 07:14:59 F kernel: mdcmd (52): set md_num_stripes 1280
    Apr 27 07:14:59 F kernel: mdcmd (53): set md_queue_limit 80
    Apr 27 07:14:59 F kernel: mdcmd (54): set md_sync_limit 5
    Apr 27 07:14:59 F kernel: mdcmd (55): set md_write_method
    Apr 27 07:15:25 F flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
    Apr 27 07:40:35 F emhttpd: spinning down /dev/sdd
    Apr 27 07:40:35 F emhttpd: spinning down /dev/sdf
    Apr 27 07:40:35 F emhttpd: spinning down /dev/sde
    Apr 27 07:40:35 F emhttpd: spinning down /dev/sdg

    Note that sdc never shows up in the list!!!

    All drives (including sdc) are in the same "wakeup group", but sdc runs and runs and runs..

    The drive only contains movies; the only access I could see was from the folder caching plugin now and then.

    I'm attaching diagnostics, but I doubt there is anything in there (or maybe I have overlooked something?).
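
    For completeness, the kind of console check that should rule out an open file or a drive that never reaches standby (just a sketch; the mount point is an example, use whichever disk share sdc belongs to):

    # ask the drive itself for its power state (active/idle vs. standby)
    hdparm -C /dev/sdc
    # list any processes still holding files open on that disk
    lsof /mnt/disk3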

     

    f-diagnostics-20240427-0848.zip

  8. 13 minutes ago, jj1987 said:

    How did you arrive at that idea?

    Those are normal single devices. How would a RAID5 come out of that - without UNRAID parity?

    Hmm, yes, could be; the display is not entirely clear there. On closer inspection Cache2 has no FS listed, so let's assume the four disks form one. I probably misread it.

     

    But then I would rather run them as one ZFS RAIDZ pool than like this.

     

  9. 3 hours ago, Cramp4600 said:

    Can I optimize or change anything else? E.g. additional SMB configurations?

    TURN OFF THE UNRAID PARITY!!!

    You don't need it anyway; it is doubled up, because the ZFS array is RAID-5 and has its own parity handling.

     

  10. Oh God! Yet another of the infamous "ZFS in the array" victims!

    You will find plenty of threads on this topic here in the forum, and above all on why you should NOT DO IT!

    (Short version: ZFS "optimizes" without any regard for the UNRAID array. So the drive heads fly all over the place while UNRAID desperately tries to keep the parity in sync. The whole thing escalates until almost nothing works anymore. Your 20MB/s are already the maximum you can hope for from this constellation!)

     

    • Like 2
  11. Paying for updates is nothing bad. You cannot expect a company to work for you for free for decades. They have to pay their employees, the offices, the hardware...

    Selling licenses alone does not cover that forever. The market becomes saturated at some point, and worthwhile improvements get harder to build in after a while.

    So a wise company thinks about opening up "new markets" long before it goes bankrupt.

    If "once a year" is too much (I'm looking at Adobe Elements, which is resold every year although the additions are marginal or do not help you at all), "once every major version" will be much less frequent (until they run low on money and simply bump up the version number someday...).

     

    With your old version you are in quite a nice position: you will get unlimited updates for it, can take a look at them, and decide later whether you need them for the new server too (i.e. spend the money AFTER you have reviewed them).

     

  12. This usually means that you have combined more than one LAN card into a "bond" (aka port aggregation, aka trunk), set it to something other than "active-backup", and then connected those cards either to the same switch without configuring the switch for that trunk mode, or to different switches that are themselves connected to each other somehow (switch loop).

     

    In rare cases it can also mean that your LAN card or your switch is broken and reflects packets it should not.

     

    But the first explanation is much more likely.
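
    A quick way to see which bonding mode is actually active on the UNRAID side (a sketch; bond0 is the usual name when bonding is enabled, adjust if yours differs):

    # show the active bonding mode and the state of the member NICs
    grep -E 'Bonding Mode|Slave Interface|MII Status' /proc/net/bonding/bond0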

     

  13. Read the f*@king Manual! (of the Mobo)

    Most recent motherboards are "over-equipped": they contain more devices/connectors than they can handle at the same time (like "SATA5-8 only work if you do not use PCIe slot #3" or "NVMe #3 only works if you turn off WiFi" (in the BIOS)).

    So be sure to read those special sections and recheck that you have not accidentally created a non-working combination.

    Although modern boards look very promising, the limits can drive you crazy. And Intel CPUs are always short on lanes, so you are more likely to run out of them.

     

  14. 1 minute ago, Kilrah said:

    The check is done in the background, you can always leave and come back later

    Nice to know, but not helpful either. Usually "checking for updates" is the only daily thing I do on UNRAID. So I go to the Apps page, wait, and wait, and wait... until I think "this should be enough by now".

     

  15. There is one thing that I don't like about CA; maybe someone can fix it someday:

     

    When you open the Apps page, CA scans the installed VMs and Dockers for updates and shows "update required" (or something like that; currently no update is pending :-))) ) in the left menu under the "home" section.

    That's fine, but it is also the problem.

     

    Yeah, we know that the background check is running and we know there will be a message if there is something to update.

    BUT if there is no update, you are never told that the check is done and that you can leave the page!

     

    So you sit there, wait and wait and after a while you give up and do something else...

    Wasted lifetime :-))) (it becomes valuable once you reach my region of age 🙂 )

     

    So, my wish would be: just add a message "nothing to do" when the check is done and no updates could be found.

     

  16. 13 hours ago, Failquail said:

    Any suggestions here? Need a SFP+ module on a Mikrotik switch to PCIe slot over cat6 cable setup.

    Get a used Mellanox X3 card (warning! there are different versions demanding either a PCIe 2.0 x8 or a 3.0 x4 slot! Check your mobo to see which one fits. DON'T run these cards with fewer lanes than they need!!!).

     

    Mikrotik has 10G copper (RJ45) modules in stock; they are rather expensive and, worst of all, they get seriously HOT! But if you run only one of them, it should work. Keep an eye on the temperature though; if it gets too high, the switch turns the port OFF...

    BTW, your CAT6 cable is not the problem, the plugs on both ends are! Usually they are NOT CAT6-rated, so they can cause a lot of problems. Also, lengths >10m can be problematic! Be prepared for link losses, line drops and port resets... you have been warned.
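
    To watch this from the UNRAID side, something like the following helps (a sketch; eth0 is just an example interface name):

    # show the negotiated speed and duplex of the link
    ethtool eth0 | grep -E 'Speed|Duplex'
    # dump the NIC statistics and keep only the error counters
    ethtool -S eth0 | grep -i err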

     

    Better go for fiber or Direct Attach if possible.

     

  17. Patience 😁

    What you see is the difference between gross and net.

    You HAVE a 10G connection, but depending on the files you transfer, you do not reach the upper end.

    Small files will be slow because of the overhead needed to update the folder contents.

    Try to transfer a 1G+ file (a movie for instance).

    Also note that heavy load on UNRAID (like dockers or VMs) can slow down the whole machine.

     

    To be sure that all connections are OK, take a look at the status & error pages of the CRS305. If errors keep adding up, there is a problem, and if there are a lot of timeouts on the stats page, you know something is slowing the network down (don't panic if that port connects to the 2.5G switch; it is totally normal that it throttles the 10G line quite often, since the cheap switches do not handle speed matching themselves but push that load onto the upstream switch).

     

    Also keep in mind that high speeds can only be achieved when writing NEW files to UNRAID. Files on and from the array are limited to real disk speeds.

     

    Better to create a new share ("TEST") directly and only on the NVMe and use it for speed tests. This should work in both directions. But do not expect to see 1.1GB/s; 700-900MB/s is fine.
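
    To take the disks out of the equation entirely, a raw network test is also useful (a sketch; iperf3 is not necessarily part of a stock install, so assume it was added via a plugin or container):

    # on the UNRAID box: start the listener
    iperf3 -s
    # on the client machine: push data for 10 seconds
    iperf3 -c <unraid-ip> -t 10
    # a clean 10G path should show roughly 9.4 Gbit/s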

     

  18. You have to see this in relation to your LAN speed.

     

    The cache is only used when writing NEW files; ideally it should be able to keep up with the incoming data.

    With a 1G LAN a SATA SSD handles that easily; with a 10G LAN even simple NVMe SSDs start to sweat.

     

    For Docker and VMs, NVMe is clearly better, because that access is local, without the LAN as a bottleneck.

     

    Whether you really need RAID1 is something everyone has to decide for themselves. In any case, it immediately halves the write rate into the cache...
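
    If you want to check whether the cache pool can actually keep up with the LAN, a rough sequential write test looks like this (a sketch; /mnt/cache is an example path, and with compression enabled /dev/zero will give unrealistically high numbers):

    # write 8 GiB to the cache pool and force a flush at the end
    dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=8192 conv=fdatasync
    rm /mnt/cache/ddtest.bin
    # a 10G LAN delivers about 1.1 GB/s, so the result should be above that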

     

     

    • Upvote 1