Everything posted by ZerkerEOD

  1. I am running into an issue: out of nowhere, my PostgreSQL container has started changing my appdata/postgresql directory owner to UNKNOWN and its permissions to 600. It happens every time I try to run the system. I can change the ownership and permissions to 777, but as soon as the docker tries to start, it resets everything back to UNKNOWN 600. Any help would be great. I tried setting PUID and PGID to 99 and 100, but the docker won't even start, so I can't poke around in it.
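For reference, a few commands that can pin down which UID is doing the chown (a sketch; the container name `postgresql` and the appdata path are assumptions based on this thread, substitute your own):

```shell
# "UNKNOWN" in the Unraid GUI usually just means no local user matches the
# file's numeric UID, so list the owner numerically (path assumed):
ls -ln /mnt/user/appdata/postgresql

# The official postgres image does not honor PUID/PGID: its entrypoint
# chowns the data directory to the internal postgres user (UID 999) and
# tightens permissions at startup, which matches the UNKNOWN + 600 symptom.
# One documented workaround is running the container as an arbitrary user
# (here 99:100, nobody:users on Unraid) after chowning the data to match:
chown -R 99:100 /mnt/user/appdata/postgresql
docker run -d --name postgresql --user 99:100 \
  -e POSTGRES_PASSWORD=changeme \
  -v /mnt/user/appdata/postgresql:/var/lib/postgresql/data \
  postgres:14
```

With `--user` set, the entrypoint skips its root-only chown step and runs postgres directly as that UID, so the host ownership stays put.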
  2. I am struggling here. I am trying to switch from transmissionvpn to delugevpn. With transmission, I can reach all my ports when I use --net=container:transmissionvpn. But when I change the containers to use the deluge client and restart the stack, they have internet, but I can't reach their management ports anymore.
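For what it's worth, with `--net=container:<name>` the joined containers share the VPN container's network stack, so their management ports have to be published on the VPN container itself. A sketch (image name and port numbers are illustrative, not taken from this thread):

```shell
# Publish every joined container's management port on the VPN container.
# 8112 is Deluge's web UI; 8080 stands in for another service's UI.
docker run -d --name delugevpn \
  --cap-add=NET_ADMIN \
  -p 8112:8112 \
  -p 8080:8080 \
  binhex/arch-delugevpn

# Services then join delugevpn's network namespace and publish nothing
# themselves; their ports are reached through the mappings above.
docker run -d --name someservice --net=container:delugevpn someimage
```

This is why the ports worked under transmissionvpn: they were presumably mapped on that container, and the mappings don't carry over when the stack points at a new network container.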
  3. Okay, I did some more digging. It looks like the file permissions change when I start the PostgreSQL container. I am not sure why this is happening. It is set up not to update and to sit at version 14. All of a sudden it just started doing it. Running it as root doesn't help, and adding a PUID and PGID of 99 and 100 doesn't fix it either. There don't seem to be any logs I can find in docker logs that show it changing.
  4. Okay, this happened again today. Here are the diagnostic logs. I hope someone can really help me out. nasty-diagnostics-20230612-2344.zip
  5. I am struggling. I have a container set up for authentik that relies on a postgresql14 container. It was working flawlessly until recently. Over the last couple of weeks, I get notified that authentik cannot reach the database, so I go looking, and the appdata share looks good, but the postgresql14 appdata directory is now owned by UNKNOWN with only owner read/write privs. The postgresql instance will not start because it is unable to read the files. So I have been having to go and manually alter the ownership and privs, and it runs fine for a day or two, then bam, it dies again. I thought it could have something to do with my backup (using CA Backup for docker containers), so I exempted the postgres container from that process, and it is still happening. I just uninstalled it, as I have switched to BTRFS snapshots for backup, and I am hoping that fixes the issue. I just restarted the entire system, so I am not sure what the logs will have. If it does it again, I will try to pull the system logs shortly after getting the notice. Has anyone else had this happen, and if so, how did you fix it, even if the cause turned out to be something else?
  6. Awesome, thanks for the tips. That worked. Out of curiosity, if I took a snapshot while a VM was running, would that cause any issues if I needed to recover from the snapshot?
  7. Can I not make an existing folder a subvolume?
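Not directly — btrfs has no in-place conversion of a directory to a subvolume. A common workaround (sketch; paths assume the vmstorage/appdata layout mentioned in this thread) is to create a subvolume next to the folder, reflink-copy the contents, and swap the names:

```shell
cd /mnt/vmstorage
btrfs subvolume create appdata.new
# --reflink on btrfs shares extents, so this "copy" is fast and takes
# almost no extra space
cp -a --reflink=always appdata/. appdata.new/
mv appdata appdata.old && mv appdata.new appdata
# once everything checks out: rm -rf appdata.old
```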
  8. Nothing, it was the directory I was putting the snapshots in. So I feel I am messing up. These are my current directories on vmstorage. I am wanting to get rid of appdata-backup in favor of the snapshots. Currently, I have the script that stops all containers, then copies the appdata directory into the appdata-backup and then restarts them all. But I read somewhere that taking a snapshot is better and you can leave the dockers running. So right now, I just want to snapshot the appdata directory.
  9. I cannot get this working. Every time I take a snapshot, the folder is empty. And here is the directory: What I am trying to do: I moved the appdata for my dockers to this BTRFS drive, and I would like to take a snapshot of it and then use duplicati to back up the snapshot online.
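One likely cause of the empty snapshot: `btrfs subvolume snapshot` only captures data that lives inside the source subvolume itself, so snapshotting the top level shows any nested subvolumes as empty directories, and a plain directory cannot be snapshotted at all. Assuming appdata has itself been made a subvolume, a read-only snapshot for duplicati might look like this (paths assumed from this thread):

```shell
mkdir -p /mnt/vmstorage/snapshots
# -r makes the snapshot read-only, which is what you want for a backup source
btrfs subvolume snapshot -r /mnt/vmstorage/appdata \
  "/mnt/vmstorage/snapshots/appdata-$(date +%Y%m%d)"
# point duplicati at the snapshot path, then clean up old ones with:
# btrfs subvolume delete /mnt/vmstorage/snapshots/appdata-<date>
```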
  10. Is there a way to upload a lot of icons to select from, instead of uploading them individually?
  11. That is a lot of good information and will help me out. But at the time I was seeing the issue, none of my shares were set to Yes: Prefer was the domain share. I will set appdata to prefer my cache, domain will be on vmstorage and set to Prefer, and I will set system to prefer the cache. The anonymized one is probably my data directory that holds all the storage for my old dockers (not appdata), containing my media and storage files. One thing I would question: if system should be on a fast pool, why does it default to No? As of right now, everything moved correctly to where I had placed it. Now that my initial data migration is complete, I've added the parity drive and am running the initial parity. Edit - for the initial data migration, I was reading about having everything go to cache and then letting mover deal with it to speed things up. So I did take everything off cache for more room. I think some of my issues were that my initial cache was 500GB and was filling well before the hour was up. After I got the mover plugin, set it with vmstorage, and also set it to start at 75%, it was way better.
  12. Here are the diagnostics. nasty-diagnostics-20220821-1829.zip
  13. So it's working now after I downloaded CA Mover Tuning. Not sure why, but everything is being moved from cache to the array. Would love it if someone knows why it was working in reverse without that plugin.
  14. You can ignore my noobness with Unraid and the settings. I found that I have to go to Settings, then NerdPack, and can install the packages I need. I looked under Plugins, clicked NerdPack, and didn't see anything, so that was my bad.
  15. I have NerdPack installed but am not seeing an option to install screen; I thought everything was included in the plugin.
  16. So this isn't working for me. I'm trying to use screen or tmux so that I don't have to keep the terminal open while migrating my data. I downloaded Krusader and it's horrible; it keeps failing after about 100 gigs, telling me it can't write. All of it is going to the same share.
  17. Hello, I'm having an issue moving files from cache to the array. I have mover set to hourly, and it was working great with my data migration until it didn't. After it wrote the initial amount to the array, I had 330GB on the array. Now, after it ran, I have 178GB on my selected cache and 256GB on the array. I have the share set up with Cache: Yes, Device: vmstorage, which is my larger SSD; once I move everything, my cache will be the 500GB and the 1TB will go to VM storage. My understanding is that Prefer will keep data on the cache, while Yes will initially write to it and then let mover move it to the array.
  18. That's what I'm thinking, I'm looking into trying to get something and building something from scratch.
  19. How can I pull it from the server then? I just went to the syslog section and copied the text to a file. On a side note, I don't think it's Unraid at all. I downloaded memtest86, booted it up, and started running it. At 1 minute 20 seconds it froze, just like Unraid did. I'm not sure why ADM (Asustor) works perfectly, including large transfers, but everything else fails miserably. I can't wait to get rid of this POS hardware.
  20. So I have it running (I have to boot through the boot-from-file option each time, otherwise it tries to boot into ADM). I am about to pull the CMOS battery to see if that fixes the BIOS. Now that I have it running, I started the syslog server after the first crash. I now have a display connected directly to the NAS and am using the terminal for the file transfer. I will upload two photos showing that, but I'm not sure how much it helps. The second photo, with only one line of output from rsync, was taken after it froze. I let it run overnight until the morning; the display stayed on, didn't time out like normal, and key presses did not register. I figured maybe it was still running in the background or something, and 8TB ought to transfer in about 15 hours. Today's login showed that it wasn't running in the background, as I have only about 5GB more on the drives, so it failed quickly after starting rsync. syslog.txt nasty-diagnostics-20220807-0923.zip
  21. It does not appear to be the flash drive. I just got a Samsung FIT, which is a recommended one, and it is also not being recognized as a boot device. I am not sure why it initially worked and now doesn't.
  22. I renamed it when flashing several times. I only tried once for EFI (without naming it) and it didn't work either. I will update tomorrow, but honestly I think it could be the Asustor 6604T. The reason I am switching from ADM to Unraid is that ADM and Docker would crash about every 3-5 days, leaving me no access to the services I was hosting, and it wouldn't reboot from the ADM manager, or even by SSHing in and doing a kernel reboot. I would have to have physical access to the server and hold the power button for a hard shutdown, which is never good.
  23. No, sorry, I can only boot if I go to the boot-from-file option, which means I have to have a keyboard and monitor plugged in at all times for when or if it crashes or a power outage takes it offline. It used to show in the actual boot menu and would remember the choice, before I pulled the USB to attempt to update the BIOS. I uploaded images in hopes that helps. I don't know if it's the flash drive or the BIOS that's bad. I ordered a Samsung FIT drive that should arrive tomorrow, and I'll try to format it and see if that is recognized. But since I reformatted the SanDisk yesterday, I wouldn't be able to pull the logs for the original problem.
  24. That is sadly easier said than done. I apparently burned it to the ground. I was researching similar problems, and someone said to ensure you have the most updated BIOS, and the only way to do that is from inside the Asustor ADM software. So I did a stupid thing and pulled the flash drive that was booting properly so that it would boot to ADM. I also pulled all the HDDs so that ADM didn't overwrite what I had. Turns out you need a drive to start it, so I inserted one I didn't care about and found out my BIOS was already the most up to date that crappy company has released. So I stuck my flash drive back in, and it is no longer seen in the boot menu. I can select boot-from-file and boot that way, but if it ever goes down I have to connect the monitor and keyboard to recover. So I spent most of last night rebuilding the flash drive and dealing with losing my configuration, since I was still on a trial key testing it out. Then I came across a post saying SanDisk isn't recommended anymore, so I ordered a new one. I have also tried flashing the stick in both UEFI (originally working) and EFI modes. Nothing works anymore.
  25. Hey, I am new to Unraid and on the trial right now. I have been struggling all day trying to migrate my data into an Unraid array. I am running on an Asustor 6604T; I copied all my data off the Asustor onto a 12TB HDD (8TB used). Now I have set up Unraid and am trying to migrate the data back. I have tried copying it through the terminal with cp to /mnt/user/data/, I have tried using mc to the same location, and I have even tried copying it with rsync; every time, within about 5 minutes, the Unraid web GUI fails and I have to hold the power button down for a hard shutdown and then bring it back up. Does anyone have any idea? I also took out the parity drive and set up an hourly mover run from the scheduler, and I don't know what else to do. It seems stable and runs for hours as long as it's not transferring data from the external drive to the array.
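Since the GUI dying mid-transfer keeps killing the copy, running rsync inside screen at the local console at least makes the transfer survivable and resumable (sketch; the external disk's mount point is an assumption, substitute wherever the 12TB drive is mounted):

```shell
screen -S migrate                 # start a named, detachable session
rsync -avh --progress /mnt/disks/external/ /mnt/user/data/
# Ctrl-a d detaches the session; reattach later with:
screen -r migrate
# rerunning the same rsync after a crash skips files already copied
```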