denishay

Everything posted by denishay

  1. The built-in AMD graphics are definitely not that great when transcoding 4K; 1080p should be fine though. As you already have the hardware, I suggest you simply test Unraid and see how it fares for your transcoding needs. Note that Plex doesn't offer hardware transcoding in its free version: you need either a paid plan (subscription) or the lifetime pass, which costs way more than it's worth.
  2. Confirmed for me too. Uninstalled ZFS Master, and disks spin down again... good, as here in the UK, electricity cost is prohibitive.
  3. Hi all, I have been experiencing what looked like "random" file deletion in a share. Basically, I have a "downloads" share where several dockers leave their download files (torrents or otherwise). I had no problem whatsoever before 6.12: the share was on a separate cache pool with a single cache drive, and I was moving files manually when and if required.
     Since 6.12.14 (and now .16), I have had files mysteriously disappearing from that folder. Whenever they were torrent download results, I still had a ".fuse_xxxxx" file left over, as the file was still "in use". The "new" downloads share is not on one drive, but on its own separate cache pool of 3 ZFS drives (3x1TB NVMe as raidz1). That pool was set with the cache as primary storage and the array as secondary... I foolishly thought this was good, as things would then silently get transferred to the array on a regular basis.
     It turns out that the mover process, when running, does NOT transfer the files to the array. It simply DELETES them. So (to me) it seems there is a bug either in the mover process or in the way ZFS pools are handled. Could it be that they are treated as "disks" instead of "shares" and we are hitting the old do_not_copy_from_share_to_disk bug? (Or something to that effect; I recall there was a situation that could result in data loss.) In any case, if I disable the secondary storage and leave the cache on its own, without storing on the array, the file deletion stops.
     Can anyone confirm and/or help so we can narrow down the issue for sure and get it fixed? I am sure that under no circumstances should the mover be in a position to lose data, regardless of the choices of the end user.
  4. Couldn't find it in this thread, but here is the solution that fixed the "can't log in anymore" issue for me. Taken from: Adding this line under [Preferences] in the config file works for manually setting the default password to adminadmin:
     WebUI\Password_PBKDF2="@ByteArray(ARQ77eY1NUZaQsuDHbIMCA==:0WMRkYTUWVT9wVvdDtHAjU9b3b7uB8NR1Gur2hmQCvCDpm39Q+PsJRJPaCU51dEiz+dTzh8qbPsL8WkFljQYFQ==)"
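     For context, a sketch of where that line lands. The path below is the usual appdata location for qBittorrent docker containers, but yours may differ, and you should stop the container before editing:

        # hypothetical appdata path; adjust to your container's config location,
        # and stop the qBittorrent container before editing
        # file: /mnt/user/appdata/qbittorrent/qBittorrent/qBittorrent.conf
        [Preferences]
        WebUI\Password_PBKDF2="@ByteArray(ARQ77eY1NUZaQsuDHbIMCA==:0WMRkYTUWVT9wVvdDtHAjU9b3b7uB8NR1Gur2hmQCvCDpm39Q+PsJRJPaCU51dEiz+dTzh8qbPsL8WkFljQYFQ==)"

     Then restart the container, log in with admin / adminadmin, and change the password from the WebUI right away.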
  5. If it's mostly for you only, you might even consider something that doesn't really need transcoding, if you are happy to download to your device what you would be playing (a phone, I suppose?). If you have decent upload bandwidth, maybe just VPN in and enjoy your content the same as you would on your LAN. That means any setup would do, and what you chose on PCPartPicker is way overkill for any of that. One last word on the drives: 12TB @ $219 seems a bit high. You could get fewer drives at a better price per TB, like the current Seagate Exos 18TB, very often around that same price or barely $10 or $20 more. Plus, they have a 5-year warranty (WD Red Plus only 3, iirc).
  6. One of the most important points here will be: who will be accessing your media content (only you, or friends and family), over which network (Internet, LAN only, etc.), and using which devices (tablets, phones, smart TVs, etc.)? Because if you have many people accessing content remotely on not-so-great devices (phones or old tablets, for example), you might require transcoding. And it doesn't come for free with Emby: only premium users (so with a paid subscription) get that. If you want 100% free, you need to consider Jellyfin. If you want easy, Plex (here again, transcoding doesn't come for free). Getting the content is the easy part. Making sure others can easily access it (Plex app on their TV? Something else? Does the TV have a "private" app store that doesn't have an Emby client?) is quite another. So think about the whole setup and software side; the hardware part is the easy one to solve.
  7. Nice set of photos and a nice build. If I may add: if you place your NVMe controller in the first PCIe slot, you would have enough space, with a custom bracket, to hold one extra drive for sure, maybe two.
  8. Hi, and welcome to the rabbit hole of building your own system! Yes, that workstation would be an excellent starter system. You can easily add more drives down the line at no to low extra cost, same for extra RAM, and power usage should be fairly modest too. Two warnings though:
     • Dell PCs are typically not "standard" (that line uses a BTX motherboard and a proprietary PSU). That means getting more power to more components will have to go through adaptors, and if anything goes wrong with the motherboard, you can't just "buy a new one"
     • When and if you want to upgrade anything other than RAM and drives, that will mean purchasing at the very least a new case + a new motherboard + a new PSU
     But again, if you got a good deal on that bundle, it's a good starter system.
  9. Hi all, I know it's an "old" thread, but in 2023 in the UK, the kWh is getting really expensive (£0.28 for me, and that's one of the "lowest" prices; it recently went above £0.58 for a period). I really need not only several smart power plugs, but a way to monitor them, so I can make the appropriate changes (or not) to my equipment (so not just Unraid). Any input and/or help from those with something in place would be great.
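     For illustration, a minimal sketch of the monitoring side, assuming a Tasmota-flashed smart plug with energy metering (the IP is hypothetical, and jq is assumed to be installed):

        # one-off reading of the plug's current power draw, in watts
        curl -s "http://192.168.1.50/cm?cmnd=Status%208" | jq '.StatusSNS.ENERGY.Power'
        # or log a timestamped reading every 60 seconds
        while true; do
          echo "$(date -Is) $(curl -s "http://192.168.1.50/cm?cmnd=Status%208" | jq '.StatusSNS.ENERGY.Power') W"
          sleep 60
        done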
  10. Adding the drivers did it for me on a fairly modest USB-C to 2.5GbE adapter (https://www.amazon.co.uk/gp/product/B0CCCWJ9MQ/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&psc=1). I installed the Realtek USB drivers, rebooted the server, then went into the Network settings and swapped eth0 to the new adapter, leaving the "old" internal gigabit one as eth1. One more reboot and everything works like a charm! A pretty nice upgrade for £27, using a port that's rarely useful on a headless server anyway (and saving some PCIe lanes that way too).
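      A quick way to confirm the adapter actually negotiated 2.5GbE after the swap (assuming it ended up as eth0, as in my case):

         # check the negotiated link speed on the new interface
         ethtool eth0 | grep -i speed
         # expected output: Speed: 2500Mb/s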
  11. OK, I guess I just got unlucky then. Every time I've tried anything beyond 32GB (always with the USB Creator in Windows 10: twice with two different 64GB drives and once with a 128GB one), it failed. I understand it's a small sample size (3 in my direct experience), and it could also have to do with the fact that this was with rather old hardware on the Unraid server (a fairly ancient motherboard). Thanks for the confirmation though.
  12. Agreed, but starting the array without the old cache will "force" Unraid to create those folders wherever it can, hence changing the settings. I know because that's what happened to me on a test setup not long ago.
  13. I'm pretty sure it was not just about the drive, but about retrieving the whole config as it was before, including docker configs, etc. Adding the drives alone will not retrieve that.
  14. Hi, Yes there is:
      • First, stop your current array
      • Go to Settings > Docker and disable it
      • Go to Settings > VM Manager and disable it
      • Add your old cache drives to the array as they were
      • Start the array. IMPORTANT: do not enable dockers and VMs yet
      • Go to the array with a decent file manager (I recommend Midnight Commander, aka mc, from the command line, but it's up to you) and delete the newly created appdata and other folders, which will have been created on the array while there was no cache (see the sketch below this list)
      • Once this is done, you can enable dockers and VMs, but make sure you select the advanced settings and point them to the files, which should now be back on your cache drive
      I hope I made things clear enough. Let me know if not.
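      For that cleanup step, a rough command-line sketch, assuming a single array disk and the default share names (verify the contents before deleting anything):

         # list what Unraid re-created on the array while the cache was missing
         ls -la /mnt/disk1/appdata /mnt/disk1/system /mnt/disk1/domains
         # once you've confirmed these are the fresh, near-empty copies, remove them
         # (-i asks for confirmation on every file, on purpose)
         rm -ri /mnt/disk1/appdata /mnt/disk1/system /mnt/disk1/domains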
  15. Hi everyone, Sorry for the "thread necromancy", but I hesitated a long time between refreshing this topic a bit and creating a new one. Mods, let me know if you prefer a new thread to be created; I'm fine with that.
      I have used for many years the small "mini" USB drives as boot drives, to avoid having something long and fragile sticking out of my Unraid server cases. But even though I stuck to reputable brands, it seems they always end up "read only" after a few years (at which point it becomes impossible to update plugins). Fair enough: I have now moved to internal motherboard USB2-to-USB-A adapters and "longer" sticks, to avoid the prolonged overheating that seems to break those mini boot drives (too) quickly.
      But then it struck me, and please correct me if I'm assuming wrongly: the official recommendation is to keep using only USB2 drives from reputable manufacturers, at a maximum of 32GB. I am not disputing that *some* among us have different boot drives, but there seems to be a largely mixed bag of bad experiences with anything non-USB2 and/or larger than 32GB. And nowadays, it is becoming harder and harder to get such devices (still possible, but it requires a LOT of double and triple-checking). At some stage, it will probably not be possible anymore.
      Shouldn't we be getting a bit more love regarding the choice of USB boot drives? (I'm not talking about wasting resources on SSDs or anything else using precious SATA or PCIe ports, just USB.) Can we see official support for USB3 drives of larger capacity? Browsing online sellers, it seems 128GB is kind of the "new" minimum.
  16. Hi, Sorry for replying in English, but this is an English-speaking forum. Each Nextcloud docker will have its own configuration requirements; I highly suggest you read the documentation or support page for the docker you are using. That way you will know where you want to store your Nextcloud files; you will not get a single answer that fits all cases, I'm afraid. Also, showing a Windows Explorer window doesn't help much here. And even though there will be a "files" subfolder for each and every Nextcloud user, Nextcloud stores most of its content and metadata in a database (usually MySQL or MariaDB). The files alone are not enough.
  17. Several notes on my side. First, the "cache": why? Your array is made of SSDs, so why have a separate, most probably slower, SSD as cache? Unless you're rocking 10GbE in that tiny box, any SSD, even a SATA one, will more than max out a gigabit connection (and a 2.5GbE one). You mention a "backup" server, but it seems to be more a separate "test" server than a backup repository, is it? Or is it one that will be used as a target for backing up data? Quick questions:
      • Is the "cache" also a SATA SSD? If yes, I would probably have used that last SATA slot to get a larger raidz1 ZFS pool
      • I don't see the main array anywhere in your screenshots. Did you just hide it, or did you find a way to NOT have a main array?
  18. A quick "up" for this one, as I am looking for the same. If anyone has any more recent info, it is most welcome!
  19. I know this is a little bit of thread necromancy at this stage, but can anyone point me to this setting? I tried, but couldn't find it (using 6.12.4 here). And for what it's worth, I have the same problem: as soon as FUSE is involved, I get extremely sluggish performance (really terrible when many small files are involved), but from disk to disk there are no issues, I can max out the hardware. Same over SMB, but I guess it's going through FUSE in that case to find the files.
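      If anyone wants to see the gap on their own hardware, a rough sketch (the "scratch" share name is hypothetical; watch the MB/s figure dd reports for each run):

         # write through the direct disk path, bypassing FUSE
         dd if=/dev/zero of=/mnt/disk1/scratch/test-disk.bin bs=1M count=1024 conv=fdatasync
         # same write through the /mnt/user FUSE layer
         dd if=/dev/zero of=/mnt/user/scratch/test-user.bin bs=1M count=1024 conv=fdatasync
         # clean up
         rm /mnt/disk1/scratch/test-disk.bin /mnt/user/scratch/test-user.bin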
  20. OK. Just to be clearer: it is due to the way SMB is configured/implemented/whatever on Unraid by default. Most of us have not played with those settings. Using anything other than SMB for transfers, or any other system than Unraid with SMB, pretty much maxes out the gigabit network. That was the whole point of my post above; I guess I didn't make it clear enough: if there are better settings to be had, why are they not set by default on new setups? In my specific case, I tried modifying the SMB settings but only got worse results (15Mb/s or less with multichannel, for example). As I said earlier, I have made my peace with that; I don't care and don't really need more. But claiming there are "no problems" with the way Unraid configures SMB by default, and having *users* go through a complex game of trial and error, is just not right. If "optimal" settings exist, why are they not set by default, and/or why are no warnings issued?
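      For reference, the kind of tuning that usually gets suggested goes into Settings > SMB > SMB Extras. These are standard Samba options, but given my own results above, treat them as an experiment rather than a fix:

         # example Samba extras; results vary wildly between setups
         server multi channel support = yes
         aio read size = 1
         aio write size = 1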
  21. I do not mean to rain on anyone's parade, but for me and at least 4 other friends on Unraid, that is not the case. Unraid is consistently "slow" on SMB transfers, with anything between 30MB/s and 80MB/s max on a gigabit LAN. If you refer to the Unraid vs TrueNAS comparison on YouTube above, you will easily see that the SAME hardware and network performs 3 times as fast (yes, even after he re-did the same tests using ZFS on Unraid too). Now... I love Unraid, and performance is still "OK" most of the time for what I use it for (mostly media streaming), but no, sorry, SMB performance on Unraid is notoriously bad for a reason. Acknowledging that would probably be a step towards having it thoroughly investigated, rather than trying to find out what's wrong on the "user side" pretty much every time. I'm not saying that problems cannot originate from users' setups, but for many of us, with many different systems and sometimes different servers, it is very apparent that anything to and from Unraid SMB shares *is* slow. Much slower than pretty much anything else. It's a good thing Unraid has loads of other qualities, but if SMB performance could get some love, it would make it so much better!
  22. Would love any pointers as to which company that is though...
  23. Yep, I have that on my test server: one ZFS pool mounted to replace the only disk in the Unraid array:
      mount -R /mnt/zfs /mnt/disk1
      I run that at the start of the array (see the sketch below), and it has worked well so far. The only limitation with the ZFS plugin is that dockers and VMs had to live on the cache pool. I will now try 6.12-rc2 to see whether that can be moved back to the array.
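      In case it helps, this is how such a script could be wired up (a minimal sketch, assuming the User Scripts plugin with the schedule set to "At Startup of Array"):

         #!/bin/bash
         # recursively bind-mount the ZFS pool over the single array disk
         mount -R /mnt/zfs /mnt/disk1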
  24. You can easily limit ZFS ARC memory usage. This will force ZFS to "only" use 8GB of RAM (8589934592 bytes = 8GiB):
      echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
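      To confirm it took effect (and note that this setting does not survive a reboot, so re-apply it from your go file or a startup script):

         # the configured cap
         cat /sys/module/zfs/parameters/zfs_arc_max
         # the actual current ARC size, which should shrink toward the cap
         grep ^size /proc/spl/kstat/zfs/arcstats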