About Keexrean

  1. Buuuut... what were @Mark A and @jang430 actually looking for? The focus was on depth and "if possible rackmount", or... Sounds a lot like looking for 12" deep AND rackmount. @Mark A apparently even ended up going for a 4-bay rackmount Netgear switch that is precisely 12 inches deep. The depth didn't sound optional to me. But you do you!
  2. I edited my message to add some info about the board, what has to be done, and its capabilities, above an E3 1240 v3. Meanwhile: 1- You could read with more attention: they are looking for rackmount, and the QNAP TS-851, maybe the best example of a consumer-basic-turd box, doesn't fit what they're looking for. 2- Looking up the TS-x73 series... the TS-873U is 534 mm deep!! How does that fit in a 12-inch-deep rack mounted to a wall?! Should they ask their neighbor if they mind a honking server-butt sticking through the wall into their living room? Size was the main challenge they're facing; you can't just ignore it like that. Unless you discovered some pocket dimension to fit a good half of the server into the void, in which case I'm deeply interested and want to buy some, name your price.
  3. 12" is roughly 305 mm, while a mini-ITX motherboard is a 170x170 mm format and a 3.5" drive is ~145 mm long, so a standard x86 board clearly won't fit in that. Nano-ITX maybe? No, that has pretty much disappeared nowadays. I understand why you would go ARM-based. Though, manufacturers like Zeal-All or Axiomtek do produce quite special boards with really small dimensions for not-that-old but still competent Intel CPUs of the last 5-8 years. IF you feel like going DIY with a dremel, a drill, a cheapo short-depth case from AliExpress and some cannibalized 5.25"-to-3.5" hotswap cages, you could actually fit 4 bays in a 1U or 8 in a 2U. Or, I think the easier route, go the other way around and "shorten" a case that already has hotswap bays by welding / JB-Welding / riveting its butt back on. And that's a big "IF": it depends on whether you feel confident/adventurous enough for this kind of project, but it's something I'd actually consider if I had a short rack (might do it one day though, to have storage at the front and firewall/switches at the back :D) An example of a possible mini board for such a DIY build is the CAPA13R by Axiomtek: JUST 104 mm x 146 mm, featuring 4x 1 Gbps Ethernet ports, a Ryzen Embedded chip, a DDR4 SO-DIMM slot, and a pair of M.2 slots, keyed B and E. Both M.2 slots would be candidates for an M.2-to-PCIe riser, in which to slot an HBA or maybe a RAID card, SAS or SATA. Add a Flex power supply (internal), or a Pico-ATX one with an external power brick, with some lead-cutting and soldering so it doesn't just feed 12 V DC to the board but also powers the riser directly. That way the expansion card isn't drawing power through the board via the riser, which avoids overwhelming the little thing and helps stability; these risers usually have a 4-pin male connector onboard for supplementary power. Great network connectivity for the format, the possibility of a SATA or SAS HBA or RAID card, a Ryzen V1807B (which isn't a weak APU at all for its TDP), DDR4... pretty capable!
You could technically DIY-build a banging, never-seen, short-depth x86 box with hotswap cages. It would be a load of work, but an interesting project nonetheless. Just shooting ideas in case it sparks some interest in your nerdiness!
  4. You would need a GPU to boot with this motherboard (it might be able to run headless afterwards, but you would need one at least to configure the BIOS and all). AMD doesn't integrate graphics in their CPUs, unlike Intel, unless they explicitly mark it in the model number with a G, like the Ryzen 5 3400G. You will indeed need a GPU, but don't take that as a problem in itself, since I heard you transcode video? With Unraid Nvidia and a green-team GPU, you could get some real use out of the caveat of needing a GPU slotted in for the board to POST: pull the GPU away from console-display duty and put it to work transcoding. Also, the X570 chipset natively supports 3000-series Ryzen; B450 is the one needing a BIOS update. Here's the cheat-sheet for you: A was the now-deprecated "cheapogarbagio"/basic chipset line, B is the decent-to-nice consumer/gaming-oriented one, X is the high-end/overclocker/RGB-puke/feature-rich line. 3xx boards are 'native' to 1000-series Ryzen, 4xx boards are 'native' to 2000 series, 5xx boards to 3000 series, and won't need a BIOS update/swap to support their 'native' generation.
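If it helps, the cheat-sheet above can be written down as a quick lookup. This is just my own hypothetical helper to illustrate the mapping, not any official AMD tool, and it only covers the out-of-the-box support rule (some boards do ship with updated BIOSes):

```python
# Hypothetical helper encoding the AMD chipset cheat-sheet above.
# Maps a chipset series ("300", "400", "500") to the Ryzen generation
# it supports out of the box, without a BIOS update.
NATIVE_RYZEN = {
    "300": 1000,  # A320/B350/X370 ship with Ryzen 1000 support
    "400": 2000,  # B450/X470 ship with Ryzen 2000 support
    "500": 3000,  # B550/X570 ship with Ryzen 3000 support
}

def needs_bios_update(chipset: str, ryzen_series: int) -> bool:
    """True if the board likely needs a BIOS update for this CPU.

    chipset: e.g. "B450", "X570"; ryzen_series: e.g. 3000.
    """
    series = chipset[1] + "00"      # "B450" -> "400"
    native = NATIVE_RYZEN[series]
    return ryzen_series > native    # CPU newer than the board's native gen

print(needs_bios_update("B450", 3000))  # True: B450 needs an update for 3000 series
print(needs_bios_update("X570", 3000))  # False: X570 supports 3000 natively
```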
  5. Hi again @saarg, I'd like to ask you to edit/snip the quote you made of my message earlier in the thread. It contained some info about the email, domain, and IP I use for Let's Encrypt and all, including docker configs. A few days ago I started seeing heavy traffic on my ownCloud login page, with a lot of random login attempts. I wouldn't have linked it to this post if, checking the email account used for Let's Encrypt, I hadn't found it was also being targeted, with a few fake alerts and fake OVH login-page phishing attempts in it. That mail address was never used to register anything other than Let's Encrypt, so in the two years up to now it had collected zero spam and no mail. Even if editing the post won't stop the current attack (not that I fear it actually succeeding, it's just annoying), I'd rather no one else but that one random script kiddie be tempted to take a crack at it, so it's better not to leave those hints available. I already edited my post; I would appreciate it if you would please snip the full quote you made of it. My post Yours with the quote Thanks!
  6. Well, concerning SanDisk, I did link a bit of a long post and didn't really point out some important pieces of information, to keep it short. Here are the main points: for the same "Product Name", they discovered that SanDisk sells at least 3 different product codes under the same name (in that post the user Nejko had a SanDisk Ultra Fit 32 GB and later bought more): BM141024848V, BM141224848V and BM150224846D. Only the first of the 3 actually boots linux successfully; the other two get stuck early on a "(initramfs) Unable to find a medium containing a live file system" error, across multiple distros. Worse, the same user, Nejko, got a hold of a SanDisk rep through mail, and here's what they were told: You read that right. They don't test their keys to see if they are bootable!! Pretty dismissive, and the kind of BS that doesn't make me want to do any business with them when it comes to USB sticks. At least their SSDs are tested to be able to boot... so far. Later in the thread, the user jdb2 dug deeper into the issue on their own SanDisk key. That's why I completely ruled out SanDisk when it comes to bootable USB disks. I may be over-engineering the question, but since I'm planning on buying a pack of 10 keys, I don't wanna play the lottery with product codes. And yes, your SanDisk USB keys boot fine; I have one that does too, and it doesn't heat up or anything... but I had one that never booted and that I gave to a friend for their university stuff. The user Nejko had one, then went to buy more of the same model "Name", and got done in the rear I/O. Given the issues I encountered, that a lot of other people encountered, and SanDisk's apparent policy: while I'm ready to give any other brand a chance, SanDisk is simply, completely ruled out for me.
  7. Thanks for the input. Kingston, as I said, I only had ONE bad experience with (a slow-af drive that I gave to my mother, who stored quite important documents on it, and which basically failed on the day of an important meeting just 2 months after purchase!). And given that I've only ever had 3 Kingston keys, I was never able to give a proper evaluation of their USB keys, nor willing to risk wasting money on potentially defective drives. And with the particular environmental conditions inside a server, compared to what most people do with USB keys, would I risk having servers booting off keys I already don't trust? Yeah, no. Also, the reason I was asking about USB 3.0 keys over 2.0, even on a USB 2.0 port, was simply that I'd think 2.0 nowadays could be a good excuse for manufacturers to cut down on flash chip quality to cut costs. I.e., nice chips going into 3.x keys, shitty slow chips going into 2.0 keys. But if you tell me you have 4 running strong and reliably, at the back of / inside servers, I may try these!
  8. Yup. That really will be a game changer, both for reliability in general and for people wanting to run stuff like SAS JBOD boxes.
  9. Yeah! I added a lot of "!?!?!?" in a memey way, but I did see it was truly confirmed; I was just spelling it out for people who might not feel like scrubbing through 1h30 of podcast. But what I'm waiting for now is support for multiple arrays, each with their own parity! That would be FIRE. Like, I have seen people on here running 28+2 drive arrays... the odds of 3 drives dying in such a configuration are concerning, when you account for the time a rebuild takes and the stress reconstruction puts on the remaining drives after the first ones fail! Plus, having multiple arrays would basically free us from using cache pools with btrfs flukes... I'd kinda prefer to run an XFS array of SSDs with parity rather than a btrfs raid, for example... or have the cache pools be able to operate as such.
  10. Hi people! Whether you're running unraid on a refurb enterprise server, in a brand-new case boasting a ludicrous amount of DDR4 and an Epyc processor, on a Threadripper, or on a janktastic set of garbagio parts that assembled themselves through dark magic into a server, there is one thing 99.9% of us have in common with unraid: we use a USB stick to boot. And that's where I'm having some concerns today. No, I won't go 'why not use something else', that's not the point. It's more like "what USB drive to use". My unraid server has been running strong for 3 years on the same USB stick. No write errors, nothing, everything's great! Or almost. That drive is an I-don't-remember-how-many-years-old 8GB JetFlash drive that's missing its plastic housing, wrapped in kapton tape. I have an ESXi box with a drive from 2007, missing its housing too, wrapped in greasy electrician's tape. (I also have a "naked" SSD... I'm a monster 😲) And since I'll be expanding the number of servers in my humble ratsnest of a flat, I decided to replace the old drives my boxes are booting from, and buy some extras for the new ones. The issue is, it's proven quite hard to know which drive manufacturer to trust for this. I already completely ruled out SanDisk. They have been known to produce drives that aren't capable of being boot drives, not following official USB specs. How the f. This thread is quite interesting on the matter; basically linux can't enumerate the drive. Since I'm gonna order like 10 keys, I don't want to end up with 10 useless tokens of industrial failure. Kingston I had a bad time with once, Transcend several times, but I also owned a lot of Transcend keys, and some... well, are still holding up, 13 years strong so far. And looking at customer reviews, basically every key I look up has its fair share of catastrophic failures (breaking randomly, DoA, or stupidly slow speeds, like USB 3.1 drives running at 1.1 speed).
And most comparative reviews are based on sheer speed-and-housing-looks-&-sturdiness-to-price, not durability over years of being powered on. So here's what I'm looking for: have any of you been running multiple identical pendrives over the years, and can you give a fair opinion on their speed and, most importantly, reliability? I'm basically looking for 8-gig (max 16-gig) pendrives, USB 3.0 and up, with just decent enough speeds, that in your experience with that particular drive won't mind living next to a PSU inside a server, with a raging-hot CPU and RAM bank puking a heat wave at it, for years, 24/7. I keep religious backups of my boot drives, but since I host services for other people, including in opposing time zones, there is no hour where downtime longer than a simple reboot is acceptable (and that's long enough already; long-POSTing board), and I prefer stuff to just WORK and not fail for some madly inconceivable reason. TL;DR: while it's easy for most components, a "server-booting USB drives: almanac of great ones" post is hard to come across.
  11. Yeah... but my server runs 2 websites (including a cloud that people other than myself need to be up to access their shit), plus some game servers, and it's also where my workstation stores all its shit over a 10 Gbps link (the WS's internal drives are solely SSDs for software and scratch). I prefer to stay on stable releases.
  12. Multiple cache pools confirmed?!?!? Great!! I'm actually running 2x 1TB SSDs in btrfs raid1, and I didn't want to basically "throw them away" for an NVMe cache; now I just have to wait a bit and I'll have both! NICE
  13. Running a bit of geo-IP for funzies on your logs: you've got Nigerian, Chinese and Indian IPs basically battling over control of your server. That's actually kinda cute. It's motivating me to set up a honeypot in the form of an i486 or i386 box with a screen that would just display live logs 24/7. On a more serious note though, if I were in that situation, yeah, I would pull the box's plug, then pass each drive of the array, one by one, through to a VM on another box and analyze them fully for malware. And obviously, wipe unraid's flash drive blank and then restore a backup of it / do a fresh new installation (possibly power the server on in GUI mode first, but with Ethernet unplugged, to take notes of any configs you fear you won't remember). From a more 'paranoid' point of view, I would possibly also go through the logs to check whether any firmware update attempt has been made on anything. Okay, HDD firmware malware is rare, but it's a thing. The worst possible thing, rare, but a thing. And about half the components of a computer have firmware. In my case, while I wouldn't worry much about the motherboard, for example, RAID cards / HBAs are already more susceptible targets, and worse for me, my old iDRAC7 and iDRAC6 interfaces, which are known targets of a firmware exploit. (That particular case is hardly troubleshootable/analyzable, so my approach is simply to stop iDRAC from communicating with the outside world completely, while keeping it accessible remotely through a VPN tunnel that lets me into my home LAN.) Though, keep this in mind: unraid isn't made to be front-facing. You can expose selective ports of docker containers and all, but as long as your router isn't pure garbage, you never, ever, have a single valid reason to put your server on the front line to the world wide web. That's a major no-no.
The only truly secure way to access any admin-related part of an unraid host (so, excluding the services you run from it, like docker- or VM-hosted web services, through selective port exposure or reverse proxies) is through a VPN tunnel. Never expose SSH or Telnet, and even 80/443 would attract a pretty hefty number of bruteforce attempts.
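For the curious, the kind of log triage I ran can be sketched in a few lines. This is a minimal, illustrative example: the log lines are made up (documentation-reserved IPs) and real sshd/syslog formats vary, so treat the regex as a starting point, not gospel:

```python
import re
from collections import Counter

# Made-up sample lines in the usual sshd style; substitute your real auth log.
LOG = """\
Jun 26 13:01:02 tower sshd[811]: Failed password for root from 203.0.113.7 port 40022 ssh2
Jun 26 13:01:05 tower sshd[812]: Failed password for admin from 198.51.100.23 port 40031 ssh2
Jun 26 13:01:09 tower sshd[813]: Failed password for root from 203.0.113.7 port 40040 ssh2
"""

# Capture the source IPv4 address of each failed login attempt.
IP_RE = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def top_offenders(log_text: str):
    """Count failed-login attempts per source IP, most frequent first."""
    return Counter(IP_RE.findall(log_text)).most_common()

print(top_offenders(LOG))  # [('203.0.113.7', 2), ('198.51.100.23', 1)]
```

From there, feeding the resulting IPs into whatever geo-IP database you have handy gives you the country-by-country "battle map".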
  14. And after like... half an hour of uptime? It happened again! Here is the diag for this one. I later used '/etc/rc.d/rc.inet1 restart', which, without even touching the cable, made everything work again... Scratch that: restarting the docker containers was enough, but restarting the VM didn't give it back access to the network, and after stopping libvirt it was impossible to start it again. Had to reboot. Also, for information:
- eth0 is on the R720's 4-port stock NIC, connected to my LAN
- eth1 is a back-to-back SFP+ link to another box (my main workstation), which was really useful to access the GUI when eth0 failed
- eth2 is an SFP+ connected to nothing
- eth3-4-5 are the other 3 of the R720's 4 stock NICs
procyon-diagnostics-20200626-1318.zip