Leaderboard

Popular Content

Showing content with the highest reputation on 06/03/19 in Posts

  1. Problem is, people in general have no clue how parity works in Unraid, and even less of a clue that formatted is NOT the same as clear. If this were implemented, I guarantee there would be data loss within the first week, when someone put a freshly formatted drive in, told Unraid it was clear, and then had a drive fail.
    3 points
  2. Adding a pre-cleared drive adds NOTHING to parity - if correctly pre-cleared, then parity does not change. As a result, Unraid makes no checks of parity when adding a pre-cleared disk. If that assumption turns out to be wrong because a disk was not actually pre-cleared correctly, then recovery of any failed disk at a later date will also fail. That is high risk, which is why Unraid will not take a user's word that a disk is pre-cleared. (A quick sketch of why all-zero data leaves parity untouched follows below.)
    2 points
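To make the parity point above concrete, here is a minimal sketch in plain bash arithmetic (the byte values are arbitrary examples): single parity is an XOR across the data disks, so including an all-zero disk changes nothing.

    #!/bin/bash
    # Hypothetical byte values from two existing data disks at the same offset
    d1=0xA5
    d2=0x3C

    # Parity before adding the new disk
    p_before=$(( d1 ^ d2 ))

    # The new disk is pre-cleared, i.e. every byte is zero
    d3=0x00

    # Parity recomputed with the zeroed disk included
    p_after=$(( d1 ^ d2 ^ d3 ))

    printf 'before=%#x after=%#x\n' "$p_before" "$p_after"   # identical values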
  3. Either I'm a glutton for punishment or just cheap, but I've purchased another Mikrotik switch, the little guy with four 10GbE ports (plus one 1-gig port). For a refresher, my previous, less-than-stellar Mikrotik experience is detailed here:

You don't have to read this reasoning of why I bought this: OK, so the reason I bought this little switch is because my main, old Cisco/Linksys 24-port switch died a pathetic death. I was using the Mikrotik CSS326-24G-2S+RM 24-port switch by my servers to bridge two 10GbE connections between them (one for the main server, the other for a video editing VM on another). And yes, I know I could do direct cabling (which I've done), but then the other devices on the network trying to access the server would be bottlenecked by a single gigabit connection. When my house switch died, I decided to move the CSS326-24G-2S+RM 24-gigabit/2-SFP+ switch into its place. I thought about getting Mikrotik's new 8-port SFP+ switch, but I really didn't need that many ports in that configuration by my servers. So I ended up with the smaller 4-port 10GbE switch, which made more sense.

Actual review/experience: There isn't a lot of info about this switch out there aside from a couple of videos, including one review with a guy drinking beer, one with a guy with a dry-erase board, and some cell phone video of someone else rambling. So I felt like I was winging it a bit, but I understood what it should be capable of doing. I had the same issues just getting into the switch as I described in the other review. Rather, I figured I'd have the same issues, so it was basically straightforward this time by following the steps I eventually figured out. Again, the documentation is lacking with the switch, and even the website link they give you to get the full documentation doesn't work. The switch comes with dual-boot software, RouterOS and SwitchOS, the former being the more powerful/configurable. After changing the IP to a static one on my network and rebooting, everything else was pretty straightforward: plug in the main server, plug in the video editing VM, add a gigabit cable for network access, and done. Completely different experience than last time.

Important to note: it only comes with one power adapter, but they promote the hell out of "dual power inputs for redundancy!", which makes you think you're getting two. You're not. I didn't. So, a bit disappointed about the Latvians having a better grasp of English and implied advertising tactics than me.

I talked about the thinness of the metal chassis on the last switch. This one also has a metal chassis, and while it doesn't feel thin, it does feel "less expensive." What I mean is that I have a Netgear ProSafe GS105 5-port gigabit switch which feels more rugged and durable, and also weighs more (go figure) than the Mikrotik switch. It's not that the Mikrotik switch feels cheap, I just thought it would feel "more." Don't get me wrong, the fit/finish is more than acceptable, as everything seems to be aligned how it should be.

I ran some speed tests and found that RouterOS was ever so slightly faster than SwitchOS, by maybe 5%... who knows. I don't use or plan to use all the functionality of RouterOS (also, I'm glad there is a safe mode), but I may in the future, so I'll just let it run. On an SMB file share from the SSD cache on the server to the SSD in the video editing VM I was getting 300-400 MB/s throughput each way. iPerf3 also showed some nice numbers once I pushed up the thread count (as I had to do with the previous switch to reach its max bandwidth); a sketch of that kind of test follows below.

This is with no network tuning, right out of the box: main server running a Mellanox ConnectX-2 into the switch via DAC, then out via DAC to a Solarflare card. Additionally, monitoring the flow in the web GUI of the switch showed peaks up to 9.7 Gbps, but really, anything over 9 Gbps is just fine with me, and would be fine with you too. So, I've had this switch for about 4 hours at this point. It seems to do the job nicely. In the next few months, I'll get two SFP+ RJ45 10GbE transceivers and make a 10GbE trunk to the main house switch (I put in Cat6a about a year ago). The remaining SFP+ port will be connected to my server that hosts the video editing VM because... why not, it's there. If my experience with this switch changes, I'll update accordingly. Also, if Mikrotik is reading this and you want to send me a 24-port SFP+ switch with all RJ45 10GbE transceivers (or 3 of those 8-port SFP+ switches with transceivers), I'll make it my main house switch and review it for you!

------- March 28, 2020 update

So I've been using this constantly for about 11 months. I've dropped it, I've pushed it, I've switched back and forth between the operating systems. It just does what it needs to do. It runs very warm, always has. But it hasn't slowed down. Earlier today I made the decision, based on its location, to add a heatsink to the top to help dissipate heat a little more (ordered one off Amazon). Wasn't my idea, as I saw it somewhere else. Does it need it? Probably not, but it won't hurt, especially since the device has a single heatsink inside that sort of touches the chassis and uses that to help dissipate heat. I currently use all 4 of the SFP+ ports: one to my main server via DAC, one to my backup server via DAC, one to my workstation VM via DAC, and one using an RJ45 transceiver from fs.com running over Cat6a to the 24-gigabit/2-SFP+ switch as a trunk. That transceiver runs really hot, hotter than the chassis of the switch, but both are always within spec. Regardless, I ordered a mixed mini-heatsink kit as well for 7 bucks and will add one or two to that transceiver just for fun. Still no word from Mikrotik on a review sample 24-port SFP+ switch filled with RJ45 10GbE transceivers... but at the rate they are moving products, I doubt it will ever come at this point.
    1 point
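For anyone wanting to reproduce the kind of throughput test mentioned in the review above, here is a minimal iperf3 sketch using multiple parallel streams (the receiver address is a placeholder; a single stream often won't saturate a 10GbE link):

    # On the receiving machine (e.g. the video editing VM) start a listener
    iperf3 -s

    # On the sending machine run a 30-second test with 4 parallel streams;
    # raise -P if a single stream tops out well below line rate
    iperf3 -c <receiver-ip> -P 4 -t 30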
  4. You do realise that pre-clear is not really needed any more as far as the main Unraid system is concerned, since Unraid no longer takes the array offline while it clears a disk. This means you could easily use the drive manufacturer's test tools on any system to do the initial stress test of the disk, and then just let Unraid go through its own clear step when you actually add the disk to the Unraid system (a generic example of such a stress test follows below).
    1 point
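The manufacturer tools mentioned above vary by vendor; as one generic, vendor-neutral assumption, a SMART extended self-test via smartctl can serve as the initial stress test on any Linux box (the device name is a placeholder):

    # Kick off the drive's built-in extended self-test (runs in the background)
    smartctl -t long /dev/sdX

    # Check progress and, once finished, the overall result
    smartctl -a /dev/sdX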
  5. Never underestimate the problems users can cause. 🤡
    1 point
  6. You're probably better off looking at examples of how it's done. Dockerfile = the uncustomised build that will be the same for everyone. Init stage = where customisation happens: copying/linking stuff to persistent storage and ingesting environment variables to change the way the container runs or functions (a rough sketch of such an init script follows below).
    1 point
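As a rough illustration of the init-stage idea above, here is a minimal sketch of a container start script; the paths, variable names, and application are made up for the example, not taken from any particular container:

    #!/bin/bash
    # Runs at container start, after the image (Dockerfile) part is already built.

    # Copy the baked-in default config to persistent storage on first run only
    if [ ! -f /config/app.conf ]; then
        cp /defaults/app.conf /config/app.conf
    fi

    # Ingest an environment variable to change how the app behaves
    sed -i "s/^port=.*/port=${APP_PORT:-8080}/" /config/app.conf

    # Hand off to the actual application, pointed at the persistent config
    exec /usr/bin/app --config /config/app.conf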
  7. A bit of an update. I changed the way the drives are attached. You can see it in the new photo. A few other minor improvements. I am thinking about making it a bit taller and putting fans under the drives to cool them by blowing straight up but to be honest the orientation of the drives, the spacing and the open concept seem to be doing a good enough job without fans. It would be cooler though, no pun intended. I have decided to start selling these so if anyone is interested let me know.
    1 point
  8. I took a look out of curiosity, but those images are nearly two years old, so I decided I wasn't that curious.
    1 point
  9. The Docker should be available within the next few hours. Click on Add Container in the Unraid GUI and search for your existing 7DtD Docker. IMPORTANT: GIVE IT A NEW NAME, OR BETTER, A DIFFERENT NAME. Change the directory '7dtd' to something else, delete all ports and create new ones, start the Docker, configure the server with the new ports, and restart the Docker (also make the ports accessible in your firewall); then you should be good to go. Very important: delete the ports and create new ones with the container and host port set to the same values as in the config (a hypothetical example of such a mapping follows below).
    1 point
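To illustrate the "container port = host port" point above with the plain Docker CLI (the image name, ports, and paths here are purely hypothetical placeholders, not the actual template values):

    # Second instance under a new name, a new appdata directory, and new ports;
    # note each host port matches the container port, as the post requires
    docker run -d --name 7dtd-second \
        -v /mnt/user/appdata/7dtd-second:/serverdata \
        -p 27000:27000/udp \
        -p 27000:27000/tcp \
        some-hypothetical/7dtd-server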
  10. Yes. Because how is unRaid supposed to know whether it's cleared or not without reading each and every byte? Use another flash drive and run a trial on it. Preclear doesn't even need the array to be started to do its thing, so effectively the trial will last forever.
    1 point
  11. There's also a signature that gets written to the drive after the preclear runs so that unRaid knows it's already cleared. Those other tools wouldn't do that, which would force unRaid to clear it again no matter what. Make sure that AHCI is set in the BIOS for the SATA ports.
    1 point
  12. You can definitely pre-clear on another machine. It is surprising that you are seeing much impact from a pre-clear. I know I can pre-clear a couple of drives on my system without noting any impact on performance. Without more information on your hardware setup it is difficult to guess why you are seeing such an impact.
    1 point
  13. I have been running Untangle, with a home license, in a VM for the last couple of years. I had a brief switch to pfSense and OPNsense, but went right back to Untangle. Yes, it costs money, but you get way more stuff ready to use, and there are a lot of graphs and data to inspect, if you have the time for it. Sent from my iPhone using Tapatalk
    1 point
  14. None of my players have reported any issues. I'm also running a 16k map too. IIRC, RAM tends to spike during larger horde nights. What's your general usage, Mort?
    1 point
  15. I am running it also and haven't had any issues. I even jacked up the number of zombie and animal spawns and increased the size to about double the RWG defaults. I have 4-8 players on the server for horde night and am seeing no issues (besides the horrible optimization of the always-in-alpha 7 Days to Die). Contact The Fun Pimps and send your logs; they might see your issue right away.
    1 point
  16. Samsung are known to work fine, at least AFAIK.
    1 point
  17. Updated and you nailed it, it's working perfectly. Thanks for your support, I appreciate it. It's my payday on Friday
    1 point
  18. I will take a look into that, but I need you for testing, since I don't own the game. Give me a few days.
    1 point
  19. Most Ryzen workarounds are discussed here.
    1 point
  20. First of all, back up all your important data inside the VM. If there are sections of the disk with errors that are used by the VM, but the files sitting on them aren't used that often, you might not see an error. You can try to use chkdsk inside the VM (an example follows below). I have never used it inside a VM, but in theory Windows should find bad sections and try to repair them, moving the files to good areas of the disk. After that you could try to use imaging software like Acronis or Clonezilla inside the VM to save the vdisk and later write the image back to a new vdisk, or directly clone it to a newly attached vdisk.
    1 point
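For reference, a minimal sketch of the chkdsk step suggested above, run from an elevated Command Prompt inside the Windows VM (the drive letter C: is assumed):

    REM /f fixes file system errors, /r locates bad sectors and recovers readable data
    chkdsk C: /f /r

    REM On the in-use system drive, chkdsk offers to schedule the scan for the next boot,
    REM so reboot the VM afterwards to let it run
    shutdown /r /t 0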
  21. After you have unassigned parity2 you need to start the array to get Unraid to ‘commit’ the change.
    1 point
  22. Between those two, definitely USG. I don't believe the ASUS RT-AC66 gets Merlin builds anymore either. It's rather old. Yes brother, you're not alone in feeling that way.
    1 point
  23. Personally, I would update to 6.7.0. I had the odd problem throughout 6.6.x and the early 6.7 RCs where containers wouldn't stop properly. At some point in the middle of the 6.7 RCs the problems disappeared.
    1 point
  24. Yep, but only the standard Garry's Mod, since I don't own the game and don't know how mounting the other game files works in this game. Edit: Garry's Mod should be available within the next few hours. Didn't think the logfile rotates every day... The updated Docker is available in the next few hours, please report back... Edit: The Docker is updated, please search for updates and report back if it works now...
    1 point
  25. Thanks @cybrnook. I never would have thought you could disable the mitigations if it wasn't for him.
    1 point
  26. The VM Manager libvirt.img file is mounted at /etc/libvirt, so its contents are persistent across reboots, and any changes made under /etc/libvirt are also persistent (a quick way to confirm the mount follows below).
    1 point
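If you want to confirm that loop mount on a running system, a minimal check (assuming the standard Unraid paths mentioned above) could be:

    # Show the device backing /etc/libvirt (should be a loop device for libvirt.img)
    findmnt /etc/libvirt

    # Or list loop devices together with their backing files
    losetup -a | grep libvirt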
  27. Let me know if you find out how, too. Sent from my Pixel 2 XL using Tapatalk
    1 point
  28. @CHBMB I too see this high power consumption. I know why it's happening, too. Basically, the nvidia driver doesn't initialize power management until an Xorg server is running. The only way to force a power profile on Linux currently is to use nvidia-settings, like so: nvidia-settings --ctrl-display :0 -a "[gpu:0]/GPUPowerMizerMode=2" which requires a running Xorg display. I've been trying to dig around in sysfs to see if there is another place that this value is stored, but there doesn't seem to be. It looks like the cards are locked into performance mode... Perhaps this is worth bringing up to nvidia? In the meantime, I'm going to continue digging to see if I can find a way (perhaps an nvidia-settings docker?) to force the power state. (A quick way to read the current power state from the command line is sketched below.)
    1 point
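For anyone wanting to see what the post above describes on their own box, the current performance state can be read without any Xorg server via nvidia-smi (this only reports the state, it does not change it):

    # Query the current performance state (P0 = max performance, P8/P12 = idle states)
    nvidia-smi --query-gpu=name,pstate,power.draw --format=csv

    # Full performance/power section of the report for GPU 0
    nvidia-smi -i 0 -q -d PERFORMANCE,POWER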
  29. These usually work well:
Tunable (md_num_stripes): 4096
Tunable (md_sync_window): 2048
Tunable (md_sync_thresh): 2000
If your sig is correct and you're still using the SAS2LP, also change this one:
Tunable (nr_requests): 8
    1 point
  30. Just an addendum. While enterprise disks can perform at optimum speed without trim, they can still take advantage of trim to reduce write amplification.

If a 128 kB flash block contains 64 kB of erased data and 64 kB of current data, and the OS wants to rewrite the 64 kB of current data, then the disk has two options:
- If the 64 kB of erased data has been reported through trim, the drive can erase all content on that flash block while finding a suitable flash block for the new 64 kB write. So 64 kB of file writes results in 64 kB of flash writes.
- If the 64 kB of erased data hasn't been reported, the drive will pick a suitable spot for the 64 kB write, but it will still think 64 kB of data is valid on the original block. So when it later decides to erase that flash block, it first has to copy 64 kB to another block. A single 64 kB file write thus results in 64 + 64 kB of flash writes.

So without trim there is larger write amplification.

Another thing to consider here is that unRAID, like SnapRAID, doesn't stripe. With a 4+1 system using traditional striped RAID 5, a 1 TB file write results in 1.25 TB of flash writes, since the parity adds 25% additional surface. With unRAID, a 1 TB file write means the parity drive also has to handle a 1 TB write - so 2 TB of SSD writes in total. In a traditional RAID 5, all drives get the same amount of wear, so you can select enterprise or normal drives depending on the amount of writes the system needs to handle. In unRAID, the parity drives would need to be expensive enterprise drives to handle the additional TB of writes, or the user would have to replace the parity drives 5-10 times more often, depending on the number of data disks. (A small worked-arithmetic sketch of these numbers follows below.)

One strong point of unRAID is that the unstriped data means only the needed HDD has to be spinning. This is excellent for media consumers. With SSDs this doesn't really matter, because they don't produce noise or consume as much power, which means that one of the unRAID advantages would no longer hold true. unRAID would still have a big advantage in that each drive holds a valid file system, so you can't suffer 100% data loss just because you lose one drive more than the number of parity drives.

In the end, I think it would be better if unRAID could add support for multiple arrays - mixed freely between unRAID arrays and normal RAID-0, RAID-1, RAID-5, ... - than to focus on running a strict SSD main array. Especially if unRAID could still merge multiple arrays into unified user shares. That would allow people to select a two-disk SSD RAID-1 for applications and special data while keeping the huge bulk data on much cheaper HDDs.
    1 point
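A tiny arithmetic sketch of the parity-write comparison above (4 data disks plus 1 parity, 1 TB of file writes; the numbers simply restate the post, they are not a benchmark):

    #!/bin/bash
    # 1 TB of file writes onto 4 data disks + 1 parity, expressed in GB for integer math
    data_gb=1000
    data_disks=4

    # Striped RAID 5: parity adds 1/4 extra surface on top of the data (1.25 TB total)
    raid5_total_gb=$(( data_gb + data_gb / data_disks ))

    # unRAID: the single parity disk rewrites every byte, so total writes double (2 TB)
    unraid_total_gb=$(( data_gb * 2 ))

    echo "RAID 5 flash writes: ${raid5_total_gb} GB"
    echo "unRAID flash writes: ${unraid_total_gb} GB"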
  31. Nice workbench concept. Well executed. But ... If it's like my basement, there is storage stuff coming and going, the dog winds up down there from time to time, xmas decorations moving in and out once a year, looking for tools, rummaging through old boxes of junk. This is too open and easily knocked by an unsteady human with an awkward load. One steadying hand lands on one of those drives and it is going down, getting twisted and mangled due to the screws trying to hold it vertical against an adult's body weight. And then crashing into its neighbor with metal grinding against metal and a spark or two as the electrical cords get pulled out violently and cables rip from their connectors. Mangled fingers, cuts, screams of pain, emergency room, fire department, lawsuit, wife on the warpath (understatement). Massive data loss. Who knows! And if it is you that did it, no one else to blame! I can see no happy endings with one accident. Maybe if the top would flip over and the drives dangled down, that would help. I'd like a giant cover for this thing, with fans blowing in and out, that could be removed when you wanted to work on it. It would block balls, pets, kids and other humans - generally make it a lot safer. But leaving it like this 24x7 is a liability concern and puts the data at risk. No way!
    1 point
  32. It would be better with the find command. Read up on find. Try something like
find '/mnt/user/Media/TV Shows' -type f -name '*.jpg' -ls
This will list all your files that match the specs. If it's what you want, you can do
find '/mnt/user/Media/TV Shows' -type f -name '*.jpg' -ls -exec /bin/rm -vi {} \;
or you could drop the -i and just use -v. The quotes are necessary, and so is the semicolon.
    1 point