paqman

Members
  • Posts: 35
  • Achievements: Noob (1/14)
  • Reputation: 2

  1. Thanks a bunch, appreciate the help.
  2. Ah, that makes a lot of sense. I think that's exactly what's going on here. I was using the mv command on the CLI to move the files from one share to another. It is definitely just renaming the path, because the move is instantaneous, with no copying going on. I guess the preferred way to avoid this would be to use a file manager like Krusader, or to use the copy command and then delete the originals. If I just set all those shares to Cache: Yes, can I continue doing it the way I have been, and the mover will clean them up and move them off cache daily? I like that in the moment the move is quick and easy, with the files then migrated for me in the middle of the night. Just making sure there aren't any other drawbacks to moving them on the CLI like that.
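     A minimal sketch of the two approaches, with hypothetical share and file names:

        # mv across user shares just renames the path, so the file stays
        # on whatever disk (e.g. cache) it already lives on:
        mv /mnt/user/Downloads/file.mkv /mnt/user/Archive/file.mkv

        # Copy-then-delete forces a real write, so the destination share's
        # cache setting is honored:
        cp /mnt/user/Downloads/file.mkv /mnt/user/Archive/ && rm /mnt/user/Downloads/file.mkv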
  3. Right, I figured as much; I was just trying to figure out how they got there in the first place. Here's how the flow went: the files were downloaded to a share that is set to Cache: Yes. Then, on the command line, I used a move command to move them to the other share, which has Cache: No. I'm wondering if that's why they were in cache. Either way, I'm just going to set those shares to Cache: Yes, because I don't want files sticking around in cache. Seems so backwards lol.
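     A quick way to confirm where a file physically ended up, with hypothetical share and file names; the user share is a fused view, and the same file appears under exactly one underlying mount:

        # A path under /mnt/cache means the file is on the cache pool;
        # a path under /mnt/diskN means it is on the array.
        ls -l /mnt/cache/MyShare/file.bin /mnt/disk*/MyShare/file.bin 2>/dev/null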
  4. I apologize, I know this is an often-discussed topic, and I felt like I had a pretty good understanding of how the cache and the mover work, but now I'm a bit confused. I noticed that a few of my shares had the yellow warning "Some or all files not protected." Their cache setting is "No", so I assumed writes to those shares would never go to cache, and that the mover won't move anything on them. But when I drill into my cache drive, there are a bunch of files from those shares on cache, most of them files I added to those shares pretty recently. So with the setting at "No", they are stuck there until I set cache to "Yes" and run the mover. Any idea how files from those shares got to cache with it set to No? I guess I should just leave them set to Yes anyway; then they'll use the cache but get moved to the array every day. Is my assumption about the "No" setting incorrect? It must be, because I thought that with it set to No, those shares would simply never use the cache. When I turn cache to Yes on a share and run the mover, it moves the files as expected and the share goes green. Am I misunderstanding the "No" setting?
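     For anyone else untangling this, a simple way to list what a share currently has sitting on the cache pool (share name is hypothetical):

        # Anything under /mnt/cache/<share> is on cache, regardless of the
        # share's current cache setting:
        find /mnt/cache/MyShare -type f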
  5. Thanks, you're right; I dug around and found the same thing. Looks like 70C is the max temperature. Maybe I'll set 55C as the warning temp and 65C as the critical alarm. As for setting temp notifications for different drives, it looks like you can go to Main, click on each drive, and set them individually. Not sure how to set it per drive type, though; it's just each individual drive. Thanks for the help! Not sure why I didn't think to go look for specs on the drive lol.
  6. For quite a while now I have been getting alerts saying my cache drive is hot; two minutes later, the alert would be cleared. But this happens constantly, a rollercoaster above and below the threshold, multiple times daily. The warning temp threshold was set at 47C, and none of the alerts ever showed it hotter than 49C that I could see. I have set the threshold to 50C for now to calm the alerts and see if it ever really hits 50C. But I am curious whether there is anything I can do to help this. The cache drive is a Crucial 1TB NVMe drive. Here are the shares using cache and what they are set to:
     • appdata -> Prefer (transmission may be the most active app in here, but it is not in use daily)
     • domains -> Prefer
     • handbrake -> Yes (this container is not running; I don't currently use it)
     • isos -> Yes (I only have one Ubuntu VM running right now)
     • system -> Prefer
     • transmission -> Yes
     Any thoughts on a way to run this better, or things I don't really need cache for, to cut down on how busy the cache drive is? Or should it be able to run this busy without getting that hot? Just wondering what is safe to run it at.
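     One way to watch the real temperature outside the alert system, assuming the NVMe device node is /dev/nvme0 (check yours with ls /dev/nvme*):

        # Print the drive's current temperature from its SMART data
        smartctl -a /dev/nvme0 | grep -i temperature

        # Poll it every 60 seconds to see how it tracks activity
        watch -n 60 "smartctl -a /dev/nvme0 | grep -i temperature"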
  7. I was planning on using s3sync to make daily backups to my AWS bucket, but now I'm wondering how it's going to handle a daily sync of 3-5TB of data. Obviously I only want the changed files backed up. Currently I'm using Cloudberry Backup, and it works great, but I don't want to pay for their unlimited tier, and I was hoping to go with something simple. I can't really tell the difference between the s3sync and s3backup apps. s3sync seems to work, but it does seem to have to scan every file every time it runs. I guess I'm fine with that; it won't take all night or anything. I just want to make sure it's not going to attempt to upload every file every time; I only want new or changed files uploaded. I guess without a specialized tool the AWS CLI isn't built to track changed files, so all it can do is scan every file, but I assume it evaluates timestamp and size and only uploads changed or new files. I'm just curious what the difference is between the two tools, and what you all are using to back up your files to an S3 bucket on a daily basis.
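     For reference, a minimal aws s3 sync invocation with hypothetical paths; by default the CLI compares file size and modification time and only uploads new or changed files, though it does have to list every file to make that comparison:

        # --dryrun shows what would be uploaded without transferring anything
        aws s3 sync /mnt/user/backups s3://mybucket/backups --dryrun

        # Run for real, optionally straight into a cheaper storage class
        aws s3 sync /mnt/user/backups s3://mybucket/backups --storage-class DEEP_ARCHIVE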
  8. Edit: I figured out my main issue of syncing to a specific folder in my bucket; I had a typo in one of my settings. However, if there are any thoughts about my other two questions, ideas would be appreciated, mainly whether I can set up multiple different sync folders. Now that I've figured out my main issue, I may be able to figure this one out too. Maybe it takes spinning up multiple containers, one for each sync. But again, if you have ideas, that would be great.

     I am almost done setting up my new unRaid server, which will be taking the place of my current file server running on Ubuntu. My current server uses Cloudberry Backup to back up my files to my AWS S3 bucket, so everything is already backed up in a path like s3://mybucket/CBB_ubuntu-server/CBB_VOLUMES/. As far as I know there isn't a way to relocate the data within my bucket to a different folder (I'm storing it in Glacier Deep Archive), otherwise I would try to move the contents up a directory or two so it would sit at the root of my bucket. So what I'm trying to do is get the data on my new server backing up to that same location in my bucket.

     I'm trying to use the s3sync docker app to do this. But docker container aside, I don't even know the right CLI options to make s3 sync target a specific subfolder in my bucket. Configured as default, it will sync everything in /data (the container path) to s3://mybucket. Is there a way to tell it to sync to s3://mybucket/CBB_ubuntu-server/CBB_VOLUMES/? I've tried adding that path to the bucket field in the container, but it doesn't seem to work. Anyone know how to do that?

     Also, it seems this container is built to sync just one folder and its contents to my bucket. What if I want to set up multiple folders to sync? I've had to restructure my data quite a bit due to the way shares are set up in unRAID, so I'd like to point s3sync at the one share that is already backed up to my bucket (and not have to write much, because it's already synced), and then set up separate syncs for each of my other shares, maybe to different buckets, but more likely to different subfolders in the same bucket. Really just trying to avoid re-uploading all the data in my S3 bucket, incurring more transfer charges and taking a long time lol. Any ideas would be appreciated.
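     A sketch of how the multi-folder setup could look with the AWS CLI directly, if the container route stays limited to one folder (share names other than the existing CBB path are hypothetical); s3 sync accepts a key prefix after the bucket name, and multiple syncs are just multiple invocations:

        #!/bin/bash
        # Sync the already-backed-up share to its existing prefix, so
        # matching objects can be skipped rather than re-uploaded
        aws s3 sync /mnt/user/fileshare s3://mybucket/CBB_ubuntu-server/CBB_VOLUMES/

        # Each remaining share gets its own prefix in the same bucket
        aws s3 sync /mnt/user/photos s3://mybucket/photos/
        aws s3 sync /mnt/user/documents s3://mybucket/documents/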
  9. The hard drive came factory sealed, and everything looks brand new to me. SMART data shows no run hours. I looked up the warranty information on Seagate's site, and it has the original warranty good through Aug. 2026. Pretty sure it is a new drive. I'm cool with it, especially at that price lol. Running it through the binhex-preclear wringer right now, which means I'll be able to use it next month lol.
  10. Yeah I could fill it up with some archive photos and videos we rarely access. (That are also backed up to AWS storage.)
  11. Ha wow. Well, I don't know, that's just what I've read about refurbs. I shucked an old WD 3TB drive I've had forever and put it in today. It's a WD Green. Not sure I want to keep using it; the spin-up time on it says 9 years lol. Then I looked at my order history: I bought it in 2011. It has no errors and says it's healthy. I don't want to throw it away, but I will say that pre-clearing it took forever, at pretty slow speeds. So I wonder if that alone is a sign it's time to let it go.
  12. Yeah, I just think SMART data is usually wiped on refurbs. And I'm not opposed to a refurb, but I feel like I could get a better price, although not much better. I would prefer new, but whatever. It comes tomorrow; I'll see what I get.
  13. Ha yeah, I hear that. They were definitely better in years past than recently, I think. That said, I haven't had any bad experiences, though I'm not a high-volume buyer; I probably only buy stuff from them once every 3-4 years. But thanks for the S/N lookup idea, I'll look into that.
  14. I ordered this Seagate Exos 14TB enterprise drive from NewEgg yesterday to use as the parity drive in my new unRaid server. The seller was listed as DealsADay, and the drive was definitely listed as new, but I paid only $175 for it, which is more in line with the price of the refurbs they sell. Shortly after I bought it, the price went up to $209 (more normal for that drive). I suspect I got lucky on a price mistake or something, and maybe my purchase clued them in on it. I even contacted NewEgg support, and they verified the one I ordered was listed as new, not used or refurbished. I just want to verify that the drive is new, to make sure the seller didn't slip me a refurb when I clearly purchased new, in spite of the price. Knowing that most refurbs have the SMART data wiped, is there any real way to tell whether the drive is new or refurbished when I get it? It should be delivered tomorrow.
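     One check worth running when it arrives, assuming the drive shows up as /dev/sdb (hypothetical device name): a genuinely new drive should report near-zero usage counters, though a refurbisher can reset them:

        # Pull the usage-related SMART attributes
        smartctl -a /dev/sdb | grep -Ei "power_on_hours|power_cycle|start_stop"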
  15. Ah thank you very much, that makes sense. I guess I must have been reading documentation for pre-v6 unRaid.