Leaderboard

Popular Content

Showing content with the highest reputation on 05/04/21 in all areas

  1. Unraid 6.9.x is out! I wish everyone a nice week. Have fun!
    3 points
  2. This thread is meant to replace the now outdated old one about recommended controllers. These are some controllers known to be generally reliable with Unraid.

Note: RAID controllers are not recommended for Unraid; this includes all LSI MegaRAID models. That doesn't mean they cannot be used, but there can be various issues because of it, like no SMART info and/or temps being displayed, disks not being recognized by Unraid if the controller is replaced with a different model, and in some cases the partitions can become invalid, requiring rebuilding all the disks.

2 ports: ASMedia ASM1061/62 (PCIe 2.0 x1) or JMicron JMB582 (PCIe 3.0 x1)
4 ports: ASMedia ASM1064 (PCIe 3.0 x1) or ASM1164 (PCIe 3.0 x4 physical, x2 electrical, though I've also seen some models using just x1)
5 ports: JMicron JMB585 (PCIe 3.0 x4, x2 electrical). These JMB controllers are available in various different SATA/M.2 configurations, just some examples:
6 ports: ASMedia ASM1166 (PCIe 3.0 x4 physical, x2 electrical) *

* There have been some reports that some of these need a firmware update for stability and/or PCIe ASPM support, see here for instructions.

These exist with both x4 (x2 electrical) and x1 PCIe interfaces; for some use cases the PCIe x1 may be a good option, i.e., if you don't have larger slots available, though bandwidth will be limited.

8 ports: any LSI with a SAS2008/2308/3008/3408/3808 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, 9500-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed. Most of these require an x8 or x16 slot; older models like the 9201-8i and 9211-8i are PCIe 2.0, newer models like the 9207-8i, 9300-8i and up are PCIe 3.0.

For these, when not using a backplane, you need SAS-to-SATA breakout cables: SFF-8087 to SATA for SAS2 models, SFF-8643 to SATA for SAS3 models. Keep in mind that they need to be forward breakout cables (reverse breakout cables look the same but won't work; as the name implies, they work for the reverse: SATA goes on the board/HBA and the miniSAS on a backplane). Sometimes they are also called Mini SAS (SFF-8xxx Host) to 4X SATA (Target); this is the same as forward breakout.

If more ports are needed you can use multiple controllers, controllers with more ports (there are 16- and 24-port LSI HBAs, like the 9201-16i, 9305-16i, 9305-24i, etc.), or one LSI HBA connected to a SAS expander, like the Intel RES2SV240 or HP SAS expander.

P.S. Avoid SATA port multipliers with Unraid, and also avoid any Marvell controller. For some performance numbers on most of these, see below:
    2 points
  3. We will be making a bigger announcement soon, but for now 🔊
    2 points
  4. Indeed! Stay tuned 🙂
    2 points
  5. The new repository is: vaultwarden/server:latest
Change it in Docker settings:
- Stop the container
- Rename the repository to vaultwarden/server
- Hit Apply and start the container
That's it. Don't forget to go to unRAID Settings >> click on Fix Common Problems (if the scan doesn't start automatically, click RESCAN) and you will receive a notification to apply a fix for the *.xml file change. I just went through this procedure and can verify everything went smoothly.
    2 points
  6. Hi all, Just spent the day creating a somewhat simple script for creating snapshots and transferring them to another location, and thought I'd throw it in here as well in case someone can use it or improve on it. Note that it's use-at-your-own-risk. It could probably use more fail-checks and certainly more error checking, but it's a good start, I think. I'm new to btrfs as well, so I hope I've not missed anything fundamental about how these snapshots work. The background is that I wanted something that performs hot backups of my VMs that live on the cache disk, and then moves the snapshots to the safety of the array, so that's more or less what this does, with a few more bells and whistles.
- It optionally handles retention on both the primary and secondary storage, deleting expired snapshots.
- Snapshots can be "tagged" with a label, and the purging of expired snapshots only affects the snapshots with this tag, so you can have different retention for daily, weekly and so on.
- The default location for the snapshots created on the primary storage is a ".snapshots" directory alongside the subvolume you are protecting. This can be changed, but no check is currently performed that it's on the same volume as the source subvolume.
To use it there are some prerequisites:
- Naturally, both the source and destination volumes must be btrfs.
- Also, all things you want to protect must be converted to a btrfs subvolume if they are not.
- Since there's no way to manage btrfs subvolumes that span multiple disks in unRAID, the source and destinations must be specified by disk path (/mnt/cache/..., /mnt/diskN/...).
Note that this is a very abrupt way to protect VMs, with no VSS integration or other means of flushing the guest OS file system. It's however not worse than what I've been doing at work with NetApp/VMware for years, and I've yet to see a rollback that didn't work out just fine there. Below is the usage header quoted, and the actual script is attached.
Example of usage:

./snapback.sh --source /mnt/cache/domains/pengu --destination /mnt/disk6/backup/domains --purge-source 48h --purge-destination 2w --tag daily

This will create a snapshot of the virtual machine "pengu" under /mnt/cache/domains/.snapshots, named something like [email protected]. It will then transfer this snapshot to /mnt/disk6/backup/domains/[email protected]. The transfer will be incremental or full depending on whether a symbolic link called "pengu.last" exists in the snapshot directory. This link always points to the latest snapshot created for this subvolume. Any "daily" snapshots on the source will be deleted if they are older than 48 hours, and any older than two weeks will be deleted from the destination.

# snapback.sh
#
# A.Candell 2019
#
# Mandatory arguments
# --source | -s
#   Subvolume that should be backed up
#
# --destination | -d
#   Where the snapshots should be backed up to.
#
# Optional arguments:
#
# --snapshot-location | -s
#   Override primary storage snapshot location. Default is a directory called ".snapshots" that is located beside the source subvolume.
#
# --tag | -t
#   Add a "tag" to the snapshot names (for example for separating daily, weekly).
#   This string is appended to the end of the snapshot name (after the timestamp), to make it easy to parse and reduce the risk of
#   mixing it up with the subvolume name.
#
# --create-destination | -c
#   Create destination directory if missing
#
# --force-full | -f
#   Force a full transfer even if a ".last" snapshot is found
#
# --purge-source <maxage> | -ps <maxage>
#   Remove all snapshots older than maxage (see below) from the snapshot directory. Only snapshots with the specified tag are affected.
#
# --purge-destination <maxage> | -pd <maxage>
#   Remove all snapshots older than maxage (see below) from the destination directory. Only snapshots with the specified tag are affected.
#
# --verbose | -v
#   Verbose mode
#
# --whatif | -w
#   Only echoes commands, does not execute them.
#
# Age format:
# A single-letter suffix can be added to the <maxage> arguments to specify the unit used.
# NOTE: If no suffix is specified, hours are assumed.
# s = seconds (5s = 5 seconds)
# m = minutes (5m = 5 minutes)
# h = hours   (5h = 5 hours)
# d = days    (5d = 5 days)
# w = weeks   (5w = 5 weeks)

snapback.sh
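The age format above is simple to implement in plain bash. Here's a minimal sketch of how such a <maxage> parser could look; this is a hypothetical helper written for illustration, not code taken from the attached script:

```shell
#!/bin/bash
# Hypothetical sketch of the <maxage> parsing described above:
# a trailing s/m/h/d/w selects the unit; no suffix means hours.
age_to_seconds() {
  local spec="$1" num unit
  num="${spec%[smhdw]}"      # strip one optional unit suffix
  unit="${spec#"$num"}"      # whatever was stripped (may be empty)
  case "$unit" in
    s)    echo $(( num ));;
    m)    echo $(( num * 60 ));;
    h|"") echo $(( num * 3600 ));;    # default unit is hours
    d)    echo $(( num * 86400 ));;
    w)    echo $(( num * 604800 ));;
  esac
}

age_to_seconds 48h   # -> 172800
age_to_seconds 2w    # -> 1209600
age_to_seconds 90    # no suffix, assumed hours -> 324000
```

A helper like this makes the purge logic a one-liner: compare each snapshot's timestamp against `now - $(age_to_seconds "$maxage")` and delete the older ones.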
    1 point
  7. One plan would be not to install the parity drives until you have populated the array with the data from the old server. (That will increase the write speed to the array by a factor of two or more!) I would then consider just copying the data over the network. It sounds like you have less than 12TB of data, which means the copy time would be about 36-40 hours (if my math is correct!)
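The estimate above checks out with a quick back-of-the-envelope calculation, assuming gigabit Ethernet sustains somewhere around 90 MB/s in practice (the exact rate depends on the disks, protocol, and file sizes):

```shell
# Rough copy-time estimate for ~12 TB over gigabit Ethernet.
# 90 MB/s is an assumed typical real-world sustained rate.
data_mb=$(( 12 * 1000 * 1000 ))   # 12 TB expressed in MB (decimal units)
rate_mb_s=90
hours=$(( data_mb / rate_mb_s / 3600 ))
echo "$hours hours"               # -> 37 hours
```

At 85-100 MB/s the result lands in the 33-39 hour range, which matches the 36-40 hour figure quoted.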
    1 point
  8. Welcome! Nice! When you set up "Remote access to LAN" you will be able to access other devices on your LAN through the tunnel. So from your phone you would first make a VPN connection to Unraid to get access to the LAN, then you would start the remote desktop software on the phone and connect to your personal laptop by IP. Yes, you can have two VPN profiles/peers defined on your phone. Use "Remote access to LAN" when you trust the network you are on and just want to route the remote LAN traffic over WireGuard. Use "Remote tunneled access" when you are someplace with "risky" wifi and you want all your traffic going over WireGuard.
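On the phone side, the practical difference between these two profile types comes down to the AllowedIPs line in the WireGuard client config. A hypothetical sketch (the keys, hostname, and the 192.168.1.0/24 LAN subnet are placeholder examples, not values from the post):

```ini
[Interface]
PrivateKey = <phone-private-key>
Address = 10.253.0.2/32

# "Remote access to LAN": only traffic for the tunnel and the remote
# LAN (assumed here to be 192.168.1.0/24) goes through WireGuard.
[Peer]
PublicKey = <server-public-key>
Endpoint = your-server.example.com:51820
AllowedIPs = 10.253.0.1/32, 192.168.1.0/24

# For "Remote tunneled access", the same peer would instead use
# AllowedIPs = 0.0.0.0/0 so that all traffic is routed over the tunnel.
```

Unraid generates these configs for you per peer; the sketch just shows why the two profile types behave differently.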
    1 point
  9. reiserfsck --rebuild-sb /dev/sde1

Follow the instructions carefully. Once it's done:

reiserfsck --rebuild-tree /dev/sde1

This last one will take several hours.
    1 point
  10. xfs_repair didn't damage anything since it was run in read-only mode (-n), but it was a waste of time.
    1 point
  11. Am I reading Unraid's April digest right? Are you doing some teasing?
    1 point
  12. There are very few instances where this is true. Unraid disables a disk AFTER a write to it failed. That means that data that was sent to the disk didn't get written to that disabled disk, instead it only exists on the emulated copy. This write could be something inconsequential or an overwrite of existing data, or it may be a critical write that if discarded would mean a corrupted file or worse, a corrupted file system. The "safest" thing to do would be to do a full binary compare of the disabled disk and the emulated content, display the difference and allow the user to choose which copy is most accurate. That would not be a trivial process, and would have very little benefit over what's currently available, where you can browse the emulated disk and if it looks good, rebuild that content to the physical disk. The shortcut of just "mark drive as good" means you need a full correcting parity check to be sure all the bits that got written to the emulated disk that you just discarded are updated to keep parity in sync. It typically takes just as long to do a full parity check as it does to rebuild a disk, so you aren't saving any time.
    1 point
  13. I guess it's possible; you don't, of course, require port forwarding at all for SABnzbd. Worth investigating, I think.
    1 point
  14. A possibility is to plug that drive into a Windows system and use a data recovery utility like UFS Explorer to see what it thinks is on the drive. UFS Explorer is not free if you actually want to use its recovery capabilities, but you do not have to pay for the option to simply scan the drive looking for files.
    1 point
  15. Great, thanks for the help! Let me make that change to the c-state and the RAM. I will update if this fixed the problem.
    1 point
  16. I had the exact same issue, and after several weeks of trying to get it working I came across Tdarr. It's not as easy to set up, but it hasn't crashed my system once. Not trying to knock Unmanic, just saying there are other options.
    1 point
  17. Awesome plugin! Thanks a lot for making it!! Is it possible to display more than 4 sensors in the footer? I have 9 different sensors that I'd like to include and plenty of real estate in the footer to fit them all.
    1 point
  18. I am fairly certain that is not how the Post Arguments field is meant to be used. Its purpose is to add extra arguments to the end of the Docker run command, not to append a completely new docker command. The problem you are facing stems from the fact that the Post Arguments become part of the docker run command, and thus only execute when the container is recreated, not when it starts or stops. Unfortunately, I am not really certain how to achieve what you are trying to do, aside from forking the official Emby container and integrating your script.
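As described above, the Post Arguments field is appended to the end of the docker run command that Unraid builds, which is why it only takes effect when the container is created or recreated. A hypothetical sketch of what gets assembled (the container/image names and flag are illustrative, and the command is only echoed, not executed):

```shell
# Hypothetical sketch: whatever is typed into Post Arguments lands
# after the image name in the generated docker run command, so it
# runs only on container (re)creation, not on every start/stop.
post_args="--some-extra-flag"   # contents of the Post Arguments field
cmd="docker run -d --name emby emby/embyserver $post_args"
echo "$cmd"
```

This is also why a second chained command pasted into that field doesn't behave like a start/stop hook.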
    1 point
  19. This has nothing to do with the Driver itself. This is more a Plex "thing" since it also shows that Plex is using the GPU (I think the picture is after you stopped the playback) and what I can think of is that Plex uses the card a little longer after you stopped the playback. From my experience the memory usage is pretty normal if you transcode a 1080p file.
    1 point
  20. Yes. That's exactly why I'm never on Discord; I want to have some free time left, too. I tried it briefly once: with the roughly 140 containers and plugins I currently have, questions come in constantly and, how could it be otherwise, often the same ones. ...and I'm the kind of person who simply answers, no matter what the circumstances are.
    1 point
  21. Such a GPU normally doesn't even have a display output and isn't designed for gaming, but for CAD or rendering. You are passing the GPU through to the VM as a device, so it's as if you were installing the GPU in a physical machine. You wouldn't install a server GPU there either.

Depending on what you mean by a firewall, every operating system already has one. If you want to attack, say, port 1234 of the Unraid server, it gets you nothing, because that port was never opened by the operating system; there is no service behind it. If you attacked port 80 instead, i.e. the WebGUI, that would indeed work, because there is a web server behind it that makes the WebGUI possible in the first place. So what is a firewall, by your definition, supposed to do? Maybe you are thinking of protection against access from certain IP ranges? So someone from China shouldn't be able to reach the Unraid WebGUI. Why would they be able to in the first place? Or DoS protection? Are you planning to expose your server to the internet? (An absolute no-go.) Or too many failed logins? Who is supposed to be logging in? Someone who took control of your PC via phishing, the PC you use to log in to the Unraid WebGUI (= login already stolen)? Or your own kids? A VLAN or simply a strong password helps there.

So there are simple ground rules:
- Don't make the server publicly reachable
- Use a secure client device for operating the WebGUI (e.g. a separate guest account in Windows, a non-Windows device, etc.)

If you want to make, say, Nextcloud publicly available, you don't install Nextcloud on the Unraid server itself, but encapsulated in a Docker container that has its own IP address. If Nextcloud is successfully attacked, the attacker is initially trapped inside the container. Of course they can still do mischief, like accessing your files, sending spam mails, or offering viruses for download.

A firewall or antivirus helps zero here, because if they could protect against a hole in Nextcloud, Nextcloud would have closed it long ago.

Bottom line: there is no 100% protection. Backups are essential, and you have to reckon with the loss of private data if you make it publicly reachable and don't additionally encrypt it. And the server itself must never be made reachable over the internet.

OK, so you want to play games via Parsec or something similar. Then research what bandwidth Parsec needs: https://support.parsec.app/hc/en-us/articles/360001394931-Parsec-s-Network-Requirements- Under "Client" there is only a "minimum", and it mentions 30 Mbit/s. WiFi fluctuates and loses around 40%, so I would rather aim for 3 to 4 times that, i.e. 120 Mbit/s and up. If you can guarantee that on the client side, it should work. It wouldn't be my choice for competitive first-person shooters, but "normal" gaming is certainly possible. Whether the client device in turn has enough power for the resolution and FPS is another matter. So if you want to game at, say, 4K, it depends on what the CPU/iGPU can do. I suspect an RPi isn't up to it. EDIT: According to this, the iGPU of the RPi 4B manages up to 60 fps at 4K: https://www.4kfilme.de/raspberry-pi-4-model-b-unterstuetzt-jetzt-4k-und-hevc/ Whether Parsec uses exactly the codec the RPi accelerates has to be researched separately. By the way, there is also the alternative of working with active extension cables (DisplayPort, USB2, USB3, etc.). In that case there would be no client device and therefore no latency. See also here: there are already a few posts about this, but I believe nobody has fully implemented it yet.
    1 point
  22. 1 point
  23. Sorry, I got a bit tired yesterday. I've spent a couple of hours today trying to figure out what was happening. And guess what? I had the wrong port set up... 🙄 So, oops on my side. You see, when I started toying around with this, I hadn't yet had issues with the Unassigned Devices plugin, so I wasn't paying too much attention to things. Turns out, I had put in the port for my WireGuard setup instead of an SSH port. Once I got that out of the way, LuckyBackup did seem to be happy with the SSH key I generated from the tutorial. I tried restarting LuckyBackup just to double check, and it's not asking for a password; things just work! Now I'll just have to change the setup of the rest of the profiles to not work through Unassigned Devices, but rather directly through SSH. Thank you for all the help, I really appreciate it! EDIT: Oh, now that I'm here, would you happen to know if there is a way to use the "Also Execute" tab within LuckyBackup to show a GUI notification within Unraid itself, to let me know the result of the backup (once the Cron works again for LB)?
    1 point
  24. Hm, that's not bad, shipping is just 8EUR. Well, if 230c won't work, that could be the next try. It will take a while though, 230c is coming from China.
    1 point
  25. Now it works perfectly with the password, thanks.
    1 point
  26. Thank you for the clarification. This rec was very helpful. I was able to resolve the issues.
    1 point
  27. May 3 21:21:40 Nagato kernel: md: import disk0: (sdc) ST4000VN008-2DR166_ZDH8RTH2 size: 3907017540
May 3 21:21:40 Nagato kernel: md: import_slot: 0 replaced
May 3 21:21:40 Nagato kernel: mdcmd (2): import 1 sdb 64 3907018532 0 ST4000VN008-2DR166_ZGY94T2F
May 3 21:21:40 Nagato kernel: md: import disk1: (sdb) ST4000VN008-2DR166_ZGY94T2F size: 3907018532
May 3 21:21:40 Nagato kernel: mdcmd (3): import 2 sdd 2048 3907017540 0 ST4000VN008-2DR166_ZDH8DGWH
May 3 21:21:40 Nagato kernel: md: import disk2: (sdd) ST4000VN008-2DR166_ZDH8DGWH size: 3907017540
May 3 21:21:40 Nagato kernel: mdcmd (4): import 3

What you'll have to do then is add that drive into the array, copy all of the files from it to the smaller one(s), and then reassign everything.
    1 point
  28. Set up ST4000VN008-2DR166_ZGY94T2F as the parity; it's the largest. Were these shucked drives? There may be a hidden partition on the other 2, as they are a hair smaller. Do the drives already contain files?
    1 point
  29. Yes, mine is working great. Make sure your underlying permissions are correct on the files as well as on the dockers. My dockers are running as UMASK: 000, PUID: 99, PGID: 100. I hope that helps.
    1 point
  30. THANK YOU ALL DEARLY!!!! It was 100% the controller. I thought I was going crazy when trurl said it was due to connection issues, since everything was connected securely. JorgeB, thanks for the linked post. Both of you are rockstars!!!! I promise to get a UPS once the new home flooring is done; my wife will only humor my data storage addiction so far. I will also do my best to start educating myself on reading the diagnostics. Thank you both again for your time and help to a stranger. I hope you have a fantastic week, and I will do my best to pay your kindness forward.
    1 point
  31. I DIDN'T BREAK YOUR STUFF AND YOU HAVE NO PROOF.
    1 point
  32. It didn't seem appropriate to open a bug report for this, but I noticed on the Shares page that the help (?) section on the column headers still makes mention of AFP, despite it having been removed everywhere else.
    1 point
  33. Oh wow, I didn't know that was an option. Cool, I'll definitely look into that! I'll still miss incremental a tad, but the ease of use plus the ability to compress will soften that blow considerably. Thanks for the help everyone! It's much appreciated!
    1 point
  34. @JorgeB Hey man, thanks for taking the time to look. I've removed the server from the rack and tossed it on my desk, and have it hooked up now with all but 2 of the sticks of RAM removed. So far, going on 18 hours with 0 errors. I'm going to let it sit until Wednesday when some new RAM I ordered comes in, but so far it looks like a hardware issue. If I don't see any crashes before I'm ready to swap the RAM out, I'll mark the topic resolved.
    1 point
  35. Sorry for this. Currently working on getting it back online.
    1 point
  36. I'm here looking for the answers to the same questions it looks like @cinereus has.
    1 point
  37. You're definitely not the assholes here....
    1 point
  38. Hey, please be nice or please go elsewhere. Thanks
    1 point
  39. That 'asshole' would be me. Please keep in mind that I have a demanding full-time job, a child (a 10-month-old), and a wife outside of what I do on Ombi. All of my free time is pretty much dedicated to working on the product. V4 was released because it's more stable than v3 and I needed to release it at some point or I never would have. If you are not happy with it then I suggest you stick to v3, and if you want the voting feature ported faster, submit a PR or contribute in some other way.
    1 point
  40. Disable the storage OpROM in the motherboard BIOS. Yes. Then, as previously mentioned, run UEFI Memtest86.
    1 point
  41. Ah yes, thanks. It'd been so long since I configured my VM, I completely forgot about that. The issue is resolved now. I just need to edit it the correct way
    1 point
  42. Rather than the user script, give the Docker flavour a whirl - search for the vm_custom_icons app. (Note: the code repository is held on GitHub - https://github.com/SpaceinvaderOne/unraid_vm_icons ) Although at 92 gigs for a few icons, you might not want to...
    1 point
  43. A few suggestions if I may, from my experience in the cloud infrastructure world.

First, review your Docker folder mappings (and to some extent VM shares). Do all your Docker containers need read and write access to non-appdata folders? If one does, is the scope of the directories restricted to what is needed, or have you given it full read/write to /mnt/user or /mnt/user0? For example, I need Sonarr and Radarr to have write access to my TV and Movie shares, so they are restricted to just that; they don't need access to my personal photos, documents, etc. Whereas for Plex, since I don't use the media deletion feature, I don't need Plex to do anything to those folders, just read the content, so it has read-only permissions in the Docker config. Additionally, I only have a few containers that need read/write access to the whole server (/mnt/user), and these are configured to do so, but since they are more "administration" containers, I keep them off until I need them; most start up in less than 30 seconds. That way, if for whatever reason a container was compromised, the risk is reduced in most cases. Shares on my VMs are kept to only the required directories and mounted as read-only in the VM. For Docker containers that use VNC, or for VMs, set a secure password for the VNC component too, to prevent something on the network from using it without access (great if you don't have VLANs etc.). This may be "overkill" for some users, but have a look at the Nessus or OpenVAS containers and run regular vulnerability scans against your devices / local network. I use the Nessus one and (IMO) it's the easier of the two to set up. The Essentials (free) version is limited to 15 IPs, so I scan my unRAID server, VMs, and a couple of other physical devices, and it has SMTP configured so once a week it sends me an email with a summary of any issues found, categorized by importance as well.

I don't think many people do this, but don't use the GUI mode of unRAID as a day-to-day browser; outside of setup and troubleshooting (IMO) it should not be used. Firefox releases updates quite frequently, and sometimes they are for CVEs that, depending on what sites you visit, *could* leave you unprotected. On the "keeping your server up-to-date" part: while updating the unRAID OS is important, don't forget to update your Docker containers and plugins. I use CA Auto Update for them, set to update daily, overnight. Some of the apps could be patched for security issues, so keeping them up-to-date is quite useful. Also, one that I often find myself forgetting is the NerdPack components. I have a few bits installed (Python3, iotop, etc.); AFAIK these need to be updated manually. Keeping these up-to-date is important as well, as they are more likely to have security issues that could be exploited, depending on what you run. Also on the updates: if you have VMs and they are running 24/7, keep these up-to-date too and try to get them as hardened as possible; they can often be used as a way into your server/network. For Linux Debian/Ubuntu servers you can look at Unattended Upgrades; similar alternatives are available for other distros. For Windows you can configure updates to install automatically and reboot as needed. Hardening the OS is something I would also recommend; for most common Linux distros and Windows there are lots of useful guides online, and DigitalOcean is a great source for Linux stuff, I have found. If something is not available as a Docker container or plugin, don't try to run it directly on the unRAID server OS itself (unless it's for something physical, e.g. drivers or sensors); use a VM (with a hardened configuration). Keeping only the bare minimum running directly on unRAID helps to reduce your attack surface.

Also, while strictly not part of security, it goes hand in hand: make sure you have a good backup strategy and that all your important/essential data is backed up. Sometimes stuff happens, and no matter how much you try, new exploits come out or things get missed and the worst can happen. Having a good backup strategy can help you recover from that; the 3-2-1 backup method is the most common one I see used. If something does happen and you need to restore, where possible, try to identify what happened before you start the restore. Once you have identified the issue, you can if needed restore from backups to a point in time where there was no (known) issue and start from there, making sure you fix whatever the issue was first in your restored server. I have seen a few cases (at work) where people's servers have been compromised (typically with ransomware); they restore from backups but don't fix the issue (typically a weak password for an admin account, and RDP exposed to the internet), and within a few hours of restoring they are compromised again. Other ideas about using SSH keys, disabling Telnet/FTP, etc. are all good ones, definitely something to do, and something I would love to see done by default in future releases. EDIT: One other thing I forgot to mention: set up notifications for your unRAID server. Not all of them will be for security, but some apps like Fix Common Problems can alert you to security-related issues, so you can get notified of potential issues quicker than it may take you to find/discover them yourself.
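The scoped-mapping idea above corresponds to the access mode on each path in the Unraid template, which ends up as a `:ro` or `:rw` suffix on the volume flags of the underlying docker run command. A hypothetical sketch (container name, image, and share paths are illustrative examples; the command is only echoed here, not executed):

```shell
# Hypothetical example of scoping container mounts as described above:
# Plex only needs to read the media shares, so they are mounted
# read-only (:ro); only its appdata path is writable (:rw).
cmd="docker run -d --name plex \
  -v /mnt/user/appdata/plex:/config:rw \
  -v /mnt/user/Movies:/movies:ro \
  -v /mnt/user/TV:/tv:ro \
  plexinc/pms-docker"
echo "$cmd"
```

With this layout, even a compromised Plex container cannot modify or delete anything outside its own appdata directory.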
    1 point
  44. Sounds like you might be a good candidate to be a guest on the show!
    1 point
  45. Hi - had the same problem. I solved it with this command in SSH: So - a reboot isn't necessary.
    1 point
  46. The time has finally come! My wife and I are less than a week away from paying off our student loans! That means I can finally do my first custom Unraid build (and she can get the iPhone and iWatch she's been pining after). This has been a dream of mine for the past 2 years and I can't believe the time has finally come! Up until now, I've been using an HP Z220 CMT with an i7-3770 and 16GB of non-ECC RAM. I added two hard drive cages to the 5.25" bays, which has allowed me to have 6 x 3.5" hard drives and 2 x 2.5" drives in the case. Everything has been working well, and the iGPU, although very old, has kept power costs low when transcoding, even if the quality leaves a lot to be desired. I've also maxed out my hard drive bays and have only 400MB left on a 46TB system. A new build with a bigger case can't come fast enough. Since I just moved to a new city and now have more people to add to my Plex server, I thought it would be a good time to upgrade the whole setup to make it last for the next 5-6 years and just focus on hard drive expansion with any extra money I save along the way. I don't have fiber at the moment (school quality won out over fiber), but once we decide to build a house in about 2 years, it'll be in a part of the city that has symmetrical fiber. Right now, I'm stuck with 1Gbps download and 50Mbps upload, so I can only do about 8-10 720p 4Mbps transcodes (mostly from x264-DTS 8GB-20GB 1080p files). With symmetrical fiber I hope to do around 15-20 1080p 8Mbps transcodes at once, and I want my build to be able to handle that (or be easily upgradeable) when that time comes.
Here's what I KNOW I will purchase or have already purchased:
Case: CSE-836BE16-920B (ALREADY PURCHASED)
Front Fans: 3 x Supermicro FAN-0074L4
Rear Fans: 2 x Supermicro FAN-0104L4
Heatsink: Noctua NH-D9L
RAM: 2 x 16GB Kingston 2666MHz DDR4 ECC UDIMMs KSM26ED8/16ME
HBA Card: LSI 9211-8i (ALREADY PURCHASED)

I've narrowed everything down to 3 possible choices (yes, they are all way overkill for what I want to do). I'm leaning heavily toward Build #1 but I still need some help in deciding.

Build #1
CPU: Intel Xeon E-2288G
Motherboard: Supermicro X11SCH-F
GPU: NONE

Build #2
CPU: AMD Ryzen 5 3600
Heatsink Accessory: Noctua AM4 mounting bracket (ALREADY PURCHASED)
Motherboard: ASRock Rack X470D4U
GPU: Nvidia P2000 (ALREADY PURCHASED)

Build #3
CPU: Intel Xeon E-2136/E-2236
Motherboard: Supermicro X11SCH-F
GPU: Nvidia P2000 (ALREADY PURCHASED)

WANTS
- 15-20 1080p 8Mbps transcodes at once (mostly from x264-DTS 8GB-20GB 1080p files)
- IPMI -> I'll be traveling quite a bit and want to be able to adjust the server/BIOS while I'm away... it's not critical, but it's a very nice feature to have if you can't be sitting around the actual server
- Stability -> Since I will be traveling quite often, I want this server to handle Unraid updates easily and have no chronic issues... there are quite a few young kids relying on this server staying up constantly so they can watch their shows
- Upgrade Path -> This isn't a big want, but it would be nice if I could add/switch out a GPU or CPU in the future if/when 4K transcoding with HDR intact really becomes a thing for Plex
- Energy Efficiency -> I'm not expecting single-digit wattage with any of these builds, and energy costs around here are only about $0.12/kWh, but I would like lower wattage at idle so it's not eating up a lot of power sitting there doing nothing

Build #1 Analysis
Pros:
- Very powerful CPU with a Passmark around 20,000
- The iGPU is the newest version available and will allow me to do ~15 1080p transcodes while keeping power usage low, so the Nvidia P2000 isn't needed and can be resold for around $300 (that'll get me about 2 more 10TB drives)
- The X11SCH-F, while it doesn't have a lot of PCIe slots, does have a dedicated IPMI LAN port and 6 x 4-pin fan ports (the X11SCA-F has neither)
- The X11SCH-F will match nicely with my Supermicro chassis, so all the sensors should be easily seen in the IPMI
- Intel has historically been more stable than AMD in Unraid, and 14nm is a very established platform
- Intel Xeons tend to have very good resale value, especially the CPUs at the top for a particular socket
Cons:
- It's definitely the most expensive option
- The CPU isn't upgradeable
- I can add a GPU in the future if I need it, but that would take up the remaining PCIe slot and really negate the iGPU for me
- Intel has shown that security has not been their top priority for a while, and this chip will probably see incremental decreases in performance as more security patches are rolled out

Build #2 Analysis
Pros:
- Very powerful CPU with a Passmark around 20,000 AND at a very cheap price (<$200)
- Overall this is the cheapest option of the three builds, and I could use the extra money for more hard drives
- The Nvidia P2000 should serve me well for transcoding, and it has decent resale value in case I need to upgrade to a newer/better model in the future
- The X470D4U has more PCIe slots than the Supermicro Intel options and is also cheaper
- This build is very upgradeable in terms of CPU (can go up to 16 cores!) and GPU
Cons:
- Stability (at the moment at least) leaves a lot to be desired... between both Unraid and the ASRock BIOS/BMC it's just not 100% trustworthy right now... I expect it to be solid in the future with more updates, though... I was really hoping 6.9.0 would have dropped by now and given me more of what's in store
- The ASRock X470D4U won't match up 100% with my Supermicro chassis, and I would lose the ability to monitor my redundant PSUs
- It looks like Ryzen is more efficient at higher workloads than Intel, but it will definitely use more electricity at idle... granted, it's probably only 10-20 watts, but that is still significant extrapolated over several years

Build #3 Analysis
Pros:
- The E-2136 and E-2236 are relatively cheap, and I even found a used E-2136 for $200
- The Nvidia P2000 should serve me well for transcoding, and it has decent resale value in case I need to upgrade to a newer/better model in the future
- The X11SCH-F, while it doesn't have a lot of PCIe slots, does have a dedicated IPMI LAN port and 6 x 4-pin fan ports (the X11SCA-F has neither)
- The X11SCH-F will match nicely with my Supermicro chassis, so all the sensors should be easily seen in the IPMI
- Intel has historically been more stable than AMD in Unraid, and 14nm is a very established platform
- Intel Xeons tend to have very good resale value, especially the CPUs at the top for a particular socket
Cons:
- It has by far the lowest Passmark score of the three options, and upgrading to an E-2278G or E-2288G in the future could be both costly and difficult because of their rarity
- All the PCIe slots will be used up right away with a P2000 and a 9211-8i
- Intel has shown that security has not been their top priority for a while, and this chip will probably see incremental decreases in performance as more security patches are rolled out

Overall, I think I'll be truly fine with any of the three options, but if anyone has any suggestions on which one I should choose, I'm all ears. I've got a case of paralysis by analysis, and any recommendations might help me figure out which one to go with. Thoughts?
    1 point
  47. You have a share, anonymized name 'D-------s'. It is set to cache-no, but all of its contents are on cache. Assuming you want it on the array, and to never use cache, set it to cache-yes, run mover, and when mover finishes, set it back to cache-no.

You have a share, anonymized name 'd-----s'. It is set to cache-prefer, and all of its contents are on cache. That is fine if that is your intention. I only mention it because it isn't one of the shares that Unraid creates as cache-prefer.

You have a share, anonymized name 'D----r'. It is set to cache-yes, and some of its contents are currently on cache. Those will be moved to the array when mover runs. This is also fine if that is your intention.

Your appdata share is set to cache-prefer, and it is all on cache. Some people prefer to make this cache-only, but it should be fine like that if you don't ever fill up cache and you don't refer to cache specifically in your container mappings.

Your system share is set to cache-prefer, and some of it is on the array. Mover can't move any open files, so to get this moved to cache, you will have to disable the Docker service (Settings - Docker) and the VM service (Settings - VM Manager), run mover, then re-enable the services when it is done.

Your share named 'VM Storage' is cache-yes and all on the array currently. Keep in mind mover won't move open files. If you make a new file in this share, it will be written to cache, and it can't be moved to the array if the file is open by a VM, for example.

All of your other shares look fine. They are cache-no and have no files on cache.
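For reference, the per-share cache setting being toggled above is normally changed from the share's page in the webGUI, but it is persisted in a small cfg file on the flash drive. A sketch of what the first share's setting would look like mid-procedure (the `SHARENAME` placeholder stands in for the anonymized share name; other keys in the file are omitted):

```
# /boot/config/shares/SHARENAME.cfg
# Step 1: temporarily "yes" so mover pushes existing cache files to the array
shareUseCache="yes"

# Step 2: run mover (Main page, or invoke the mover script) and wait for it to finish

# Step 3: back to "no" so new writes bypass cache entirely
shareUseCache="no"
```

Valid values are "yes", "no", "prefer", and "only"; note that "yes" means "write to cache, then mover moves to array", which is why it has to be set temporarily to drain an accidentally cached share.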
    1 point
  48. You don't remove it. After starting the array there's an option to format any unmountable disk, next to the Stop array button.
    1 point