Lebowski89

Members
  • Posts: 37
  • Joined
  • Last visited
  1. Hi all, I've been reading (only a little) into Proxmox lately, and the one thing I've noticed is that those recommending SSD/NVMe drives recommend enterprise-grade / data-center drives, as ZFS + Proxmox can write so much to your drive that it could easily use up the life of a consumer SSD. When I think about the typical UnRaid user, they're usually one step up from people running a prebuilt consumer NAS (Synology, etc.) and have an array consisting of WD Red, IronWolf, maybe some Exos, frequently shucked Barracuda Pro, and so on, with the main requirement being CMR rather than SMR. For SSD/NVMe, I would think the typical UnRaid user is throwing in consumer SSDs like the Samsung Evos/Pros (rated around 500-800 TBW for a 500GB drive).

     So, basically: has the inclusion of ZFS in UnRaid, and it being pushed as the best thing since sliced bread (by many), led to a situation where UnRaid users may be killing their consumer SSD cache drives quicker than with XFS/BTRFS cache pools? Or is this simply a case where the way Proxmox specifically works leads to ZFS-formatted drives being heavily written to? When I read pages like https://unraid.net/blog/zfs-guide, and when I watched Ed's tutorials, the only thing I noted was that ZFS may have higher resource requirements (more RAM, maybe more CPU usage) - not that it would require buying more expensive drives.

     For me, I run an 850 Pro 512GB and an 860 Evo 500GB in a ZFS mirror cache pool. I have one WD Red formatted as ZFS in my array and a separate RAIDZ1 pool, with both the cache and the RAIDZ1 pool sending snapshots to the ZFS array drive. My Pro has 8.5 years of power-on hours (2014 purchase, five of those years as a Windows PC drive) and 133 TB of LBAs written, while my Evo has 4.1 years of power-on hours (all as an UnRaid cache drive) with 69.8 TB written. I only formatted them as ZFS a couple of months ago (BTRFS previously), so the vast majority of that wear is not from ZFS. So far I haven't noticed much extra wear from ZFS - but I hardly push my cache drives. My appdata/VMs/ISOs live on the cache pool, while the write-heavy processes go directly to the array. So I'm probably not a good example.
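     For anyone wanting to keep an eye on this, a quick way to check wear from the terminal (a sketch, assuming a SATA SSD that reports attribute 241 as Total_LBAs_Written in 512-byte sectors, as Samsung drives do; replace sdX with your device):

        # Show the wear-related SMART attributes
        smartctl -A /dev/sdX | grep -Ei 'Wear_Leveling|Total_LBAs_Written'

        # Rough terabytes written: raw LBA count (last column) * 512 bytes
        smartctl -A /dev/sdX | awk '/Total_LBAs_Written/ {printf "%.1f TB written\n", $10 * 512 / 1e12}'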
  2. Was bored, so I messed with the EPC settings on my Exos and other Seagates. The drives are rated for 600,000 load cycles, so it was more of an OCD thing for me. For example, one of my Barracudas has 50,000 load cycles in 6 years, and my two Exos have 25k in 2 years. Yes, it's a ton more than drives that don't do that stuff (I have 8-year-old WD Reds with under 500 load cycles), but the drives are built for it and can obviously handle it. I think I would be worrying about other problems before any of the drives got near 600k load cycles. But hey, I got around to dealing with it eventually.
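     For anyone wanting to do the same, this is roughly the process (a sketch using the openSeaChest tools; flag names may vary between versions, so check --help and find your device handle first):

        # List drives and their handles
        openSeaChest_Info --scan

        # Show the current EPC (Extended Power Conditions) timers on one drive
        openSeaChest_PowerControl -d /dev/sg2 --showEPCSettings

        # Disable the EPC feature entirely to stop the aggressive head parking
        # (re-enable later with --EPCfeature enable if you change your mind)
        openSeaChest_PowerControl -d /dev/sg2 --EPCfeature disable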
  3. I'm interested in this - it's not as simple as a page-refresh wipe. How do I go about it?
  4. Hey guys, as the title suggests, I simply cannot use http://containername:port to connect to other containers. I always have to use http://serverip:port. This happens regardless of whether I'm running Docker containers from the app store or compose stacks in Portainer. I have made sure the containers are on the same custom bridge network (see attached pictures), but still no dice. Things work fine with http://serverip:port, but it is annoying. I also can't use the custom bridge IP assigned to each container to connect to others; it's always the server IP. Any ideas? Thanks. compose.yml
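     For reference, this is the kind of minimal test I'd expect to pass when name resolution is working on a user-defined bridge (the network and container names here are just placeholders, not from my stack):

        # Containers only get DNS-based name resolution on a user-defined bridge,
        # not on Docker's default "bridge" network
        docker network create testnet

        docker run -d --name web --network testnet nginx
        docker run -d --name probe --network testnet alpine sleep infinity

        # Should resolve and reply if both containers really share the custom bridge
        docker exec probe ping -c 1 web

        # Check which networks a container actually joined
        docker inspect web --format '{{json .NetworkSettings.Networks}}'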
  5. Finished and it 'Completed without error'. I think I'll get rid of Scrutiny. Don't need that negativity in my life. Apparently Scrutiny has had multiple issues with Seagate drives before. ST10000DM0004-20240412.txt
  6. Hi Jorge, I'm doing an extended SMART test on the drive, but it's taking a while (30% at the moment). Here is the attributes page as it stands.
  7. Getting some label UI oddities here and there. For example, with Traefik, I've put the correct URL (http://192.168.##.##:port) in the Web UI column, but it opens the localhost IP without a port. For Whisparr the WebUI is also correct, but when you open it, the IP you've entered is doubled (http://192.168.##.##/192.168.##.##:port). I've tried taking the stack down and up, updating it, deleting the containers, etc.
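     For context, the labels presumably involved here (a sketch - I'm assuming the plugin maps the Web UI column to the net.unraid.docker.webui label and the icon to net.unraid.docker.icon; the docker run form is just for illustration, in my case they live in the compose file):

        # Hypothetical example of the WebUI and icon labels on a container
        docker run -d --name traefik \
          --label net.unraid.docker.webui='http://192.168.##.##:8080' \
          --label net.unraid.docker.icon='https://example.com/icons/traefik.png' \
          traefik:latest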
  8. Hi, a few weeks ago I installed Scrutiny (first from the app store, then moved to Docker compose), and everything was great - all my drives passed with no problems. In the last week I have converted an array drive to ZFS and moved 3 drives out of the array into their own ZFS pool, then rebuilt parity. When I checked Scrutiny a few days later, my parity drive was listed as failed. None of the critical values have failed, but it has given me a warning for Spin-Up Time and High Fly Writes, and a fail for Hardware ECC Recovered.

     UnRaid lists the drive as healthy, and the drive is benchmarking the same as before it was flagged (using DriveSpeed). I'm currently doing an extended SMART test (it will take some hours). The drive is a Seagate Barracuda Pro, 10TB, 5 years of power-on time, used as a parity drive its entire life. I haven't messed with the Seagate power management settings, so the drive has quite a high load cycle count, but well within rated spec. Should I be concerned here? Or is this just Scrutiny not playing nice with Seagate drives? TIA
  9. Welp, the menu I had been skipping over the whole time was the one thing I was about to come and ask about. Works fantastic. I've switched every single one of my Docker containers over to compose (with the help of Composerize). Edit: I notice only .png files are working for the icons; using an .svg displays a blank icon. This differs from Docker Folders, which accepts both. Is this intentional?
  10. Hi, I've been running 2x 10TB Seagate Exos in my array for nearly two years. I never messed with EPC/PowerBalance or any of that. They've been parking their heads as they are known to do, and have clocked up over 20k LCC during that time - 10x the amount of some WD Reds I have with 8 years of power-on time. The Exos have been performing great, no issues. Reading up on this on Reddit, there seems to be a range of opinions, mostly divided between people who freak out and load up SeaTools to disable EPC, and people who aren't fussed since the drives are rated for over 600k load cycles. I've been getting around to converting some drives to ZFS this week and am wondering whether I should bother trying to lower the rate of LCC on them. Being rated at 600k LCC, I won't hit that for years, if at all, with these drives, but people have me paranoid.
  11. Good script. I found the rclone/mergerfs options a bit off when it comes to having files land in the local directory - files were being uploaded straight to the remote. Sub-optimal if you want to use something like cloudplow for uploads, or don't want uploads at all. I found a set of options that do the job for a Google remote, with the local folder mounted as the first branch (directory cache time set high because the remote is polled frequently for changes; if you were running something like an SFTP remote you'd want a very low directory cache time). Essentially the same options as Saltbox uses, with category.create=ff being the key setting that makes files actually show up in the local directory.
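     A rough sketch of what that kind of setup looks like (not my exact commands - the gdrive remote name and the /mnt/user paths are placeholders, so adjust them to your shares):

        # Mount the rclone remote; a long dir-cache-time is fine for a polled Google remote
        rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
          --allow-other \
          --dir-cache-time 5000h \
          --poll-interval 15s \
          --umask 002 \
          --daemon

        # Union the local branch FIRST, then the remote.
        # category.create=ff ("first found") makes new files land on the first branch,
        # i.e. the local folder, instead of being written straight to the cloud.
        mergerfs /mnt/user/local/gdrive:/mnt/user/mount_rclone/gdrive /mnt/user/mergerfs/gdrive \
          -o rw,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true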
  12. Edit: Post removed. Permission issue fixed by adding --uid 99 and --gid 99. Docker apps successfully mapped remote files. Cheers!
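     In context, the mount ends up looking something like this (a sketch - the remote name and path are placeholders; 99 is UnRaid's nobody user):

        # Mount with explicit uid/gid so the Docker containers can read and write the remote files
        rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
          --uid 99 \
          --gid 99 \
          --umask 002 \
          --allow-other \
          --daemon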
  13. What do you mean? I did it in the same format as Saltbox. It works - remote files are mounted at the remote mount point and merged into the mergerfs folder. To test it was working correctly, I copied some files into the local folder and they were successfully merged into the mergerfs folder, showing up alongside the remote files.
  14. This is what I'm aiming for, btw. I missed this post before I made mine. I'm just about there, just dealing with some permission issues with the Docker containers trying to access the mergerfs folder. In my case, the remote gdrive files correctly show up on the remote mount point and are successfully merged into the mergerfs mount point, with Radarr and co's Docker data path pointed at /mnt/user/data/. But I just tried to add the mergerfs folder as the root folder in Radarr and it was a no-go - permission issue: "Folder '/data/mergerfs/Media/Movies/' is not writable by user 'hotio'" (using hotio containers for the most part). I'll double-check permissions and mount points and go from there.
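     For my own notes, the things I'll be checking (a sketch under a few assumptions: hotio images take PUID/PGID, UnRaid's nobody:users is 99:100, and the local branch path here is a placeholder):

        # Make sure the local branch feeding the mergerfs mount is owned by nobody:users
        chown -R 99:100 /mnt/user/data/local
        chmod -R u=rwX,g=rwX /mnt/user/data/local

        # And run the container as the same user (hotio images read PUID/PGID)
        docker run -d --name radarr \
          -e PUID=99 -e PGID=100 \
          -v /mnt/user/data:/data \
          ghcr.io/hotio/radarr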
  15. I see. Well, I assume deleting the files is something Google would let me do, since they're the ones wanting me to free up space. I'm not actually at the read-only stage yet, but it will happen eventually. To quote some random from Reddit: So I'll call it 'read-only' in the 'Google offered unlimited storage for years and years and is no longer doing so' sense of the word. What I'm thinking is, maybe I can have unionfs/mergerfs mount the local media folder and the cloud files, so that radarr/lidarr/sonarr think they're all housed in the same directory on the same file system, and then just not do the next step that seedbox/cloud solutions (like Cloudbox/Saltbox) do, which is uploading the files to the cloud with cloudplow. I assume radarr/lidarr/sonarr would still be able to delete the previous file (the one located in the cloud) and simply store the new one locally on UnRaid.