TheSquigglyline

  1. I'm talking about my data + parity drives. My Cache Drives are not being upgraded... yet
  2. Long story short, I'm replacing all the hard drives in my server. Currently I have 4x 1TB 2.5" SAS drives in a 3.7TB pool; in transit right now are 5x 6TB 3.5" SAS drives. So here is the big question: what is the best way to transfer all my data? My first thought is that, because a single new drive is bigger than the entire old pool, I should mount one drive outside the pool and copy all the data over to it. Then create a new pool out of the 4 remaining 6TB drives, transfer the data from the standalone drive to the new pool, and finally format and add the last drive to the new pool once the transfer is done. To me this method sounds best: fastest, fewest IOPS, and the least plugging and unplugging. I wouldn't clear the old 4x 1TB 2.5" SAS drives, so that if I did hit corruption while transferring the data onto and off of the single 6TB drive I would still have a backup. Does this method sound sane? Would it be smarter to just replace one drive at a time and rebuild the array? Am I forgetting something? One concern I can think of right away: in the process of moving all that data to the new pool, some "links" for my Dockers/VMs might break, leaving them unable to find their data and requiring a lot of work to fix. Is this a reasonable concern?
  3. I am currently running Unraid on a Dell R620 (2x E5-2670 Xeon, 96GB RAM), and now that my server has been up for a few years I am running into storage issues. The big problem is that the R620 only takes 8x 2.5" drives. Buying a system that takes 2.5" drives was a mistake; it is expensive as all hell to buy storage. Regardless, I only have 1 slot left, and drives over 1TB are not financially viable in the 2.5" form factor. So I need a new solution: I would like to build a JBOD drive enclosure. Currently all my drives run on a flashed Dell PERC H710, and I own a Dell PERC H810 that is not currently installed in the server. I'm not really looking for step-by-step instructions or anything too detailed; I'm more looking to be pointed in the right direction. Is it possible for me to add drives to my existing array that are physically external to my R620, say in a JBOD enclosure, while still getting the same read/write speeds as the internal drives? Can this be done with my Dell PERC H810, or do I need some other hardware? How do I get Unraid to recognize these drives as connected to the server so I can add them to the array? Any advice or links to resources would be greatly appreciated.
  4. I am receiving lots of the following errors in my syslog, and Fix Common Problems told me I am having Machine Check Events:
     Oct 8 21:30:41 Tower kernel: EDAC sbridge MC1: HANDLING MCE MEMORY ERROR
     Oct 8 21:30:41 Tower kernel: EDAC sbridge MC1: CPU 1: Machine Check Event: 0 Bank 8: 8c00004d000800c0
     Oct 8 21:30:41 Tower kernel: EDAC sbridge MC1: TSC 220fe0ec17a126
     Oct 8 21:30:41 Tower kernel: EDAC sbridge MC1: ADDR 1757c55000
     Oct 8 21:30:41 Tower kernel: EDAC sbridge MC1: MISC 908400200021a8c
     Oct 8 21:30:41 Tower kernel: EDAC sbridge MC1: PROCESSOR 0:206d7 TIME 1633746641 SOCKET 1 APIC 20
     Oct 8 21:30:41 Tower kernel: EDAC MC1: 1 CE memory scrubbing error on CPU_SrcID#1_Ha#0_Chan#0_DIMM#0 (channel:0 slot:0 page:0x1757c55 offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0008:00c0 socket:1 ha:0 channel_mask:3 rank:1)
     Does this error mean I have a bad stick of RAM in CPU socket 1, slot 0?
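For logs like the one above, the kernel's EDAC driver also keeps running corrected-error counters in sysfs, which makes it easy to see whether one DIMM keeps accumulating errors over time (the log's "MC1" and "SrcID#1 ... Chan#0 ... DIMM#0" tell you which controller and slot to watch). A sketch of what to check from the Unraid console, assuming the standard kernel EDAC sysfs layout:

```shell
# Corrected-error count per memory controller (mc0, mc1, ...)
grep -H . /sys/devices/system/edac/mc/mc*/ce_count

# Per-DIMM breakdown, where exposed by the platform
grep -H . /sys/devices/system/edac/mc/mc*/dimm*/dimm_ce_count 2>/dev/null
```

A single corrected ("CE") scrubbing error is not fatal, but a count that climbs steadily on the same controller/DIMM is the usual sign that one particular stick should be reseated or replaced.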
  5. OK, thank you again for the help. I will check out rsync and Midnight Commander. I set all the shares that were on "prefer" to "yes" and initiated the mover; I will report back what happens. I also stopped my download files and Plex media files from using the cache, since I use RAM for transcoding anyway. That should stop the cache from ever filling up. As far as my docker.img file goes, I sometimes get warnings that it is close to filling up, and I have it set to 40GB, so I will have to look further into that. I suspect it is something with a game server I have running. Here's hoping I don't have to do much, if any, Docker reconfiguring.
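For the "look further into docker.img" part, Docker itself can break the usage down, which helps pin a growing image file on a specific container (a game server writing saves inside the container, rather than to a mapped host path, is a common culprit). These are standard Docker CLI commands, run from the Unraid console:

```shell
# Usage summarized by images, containers, local volumes, and build cache
docker system df

# Per-container size; the number after "virtual" includes the image,
# the first number is the container's own writable layer
docker ps -s
```

A container whose writable-layer size keeps growing is the one filling docker.img; mapping its data directory to a host path usually fixes it.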
  6. Bummer. OK. Any idea how this happened? Would it likely have occurred because the cache drive got too full? It's only about a year-old SSD, with no SMART errors on it ever. This is just going to be a lot of work, and I would like to avoid future issues. Follow-up: can I just set all my shares to "use cache: no" and let the mover take care of "backing up the cache", or would you recommend moving the files myself? Will the mover even work, given that it will try to delete the content after copying it and that delete will fail because the drive is read-only? Thanks for the speedy reply!!!
  7. I logged onto my server after one of my Nextcloud users reported upload errors, and found that my Docker service was stopped. Upon trying to start it I was met with a "Docker service unable to start" message. Running Fix Common Problems found that my cache drive was read-only. I made sure the Docker service was off, restarted the whole server, and came back the next day (not enough time in the day); when I checked back, the cache drive was writable again and the error was gone in Fix Common Problems. Before starting the Docker service I searched the forums a bit more, came across this post (https://forums.unraid.net/topic/69187-cache-marked-as-read-only-or-full-solved/), and attempted to run the command btrfs balance start -dusage=75 /mnt/cache mentioned in this post (https://forums.unraid.net/topic/62230-out-of-space-errors-on-cache-drive/?tab=comments#comment-610551) linked from the first. I did have to start smaller on the dusage value, first at 50 and then up to 75 slowly. I also ran btrfs filesystem usage /mnt/cache, and my cache didn't show as full. So I started up Docker again, and it all fell apart: Dockers wouldn't start, with errors that the cache drive is read-only. Via WinSCP I tried to create a directory on the cache and got "General failure (server should provide error description). Error code: 4. Error message from server: Failure." Common reasons for error code 4 are: renaming a file to the name of an already existing file, creating a directory that already exists, moving a remote file to a different filesystem (HDD), uploading a file to a full filesystem (HDD), or exceeding a user disk quota. I really don't want to have to format my cache drive. I also don't know enough to know what deleting my docker.img file would do to my existing Dockers. If I backed up my appdata folder, will I be good, or will I have to re-download and re-configure all my Dockers?
I have about 118GB used on my 500GB SSD cache, so moving it to the array and back wouldn't be terrible, but it wouldn't be the most fun. How did this happen? I did receive an error that my docker image was near full; is that the cause? I did expand it, but only after these issues arose. Or is it an issue of my cache drive getting too full? Sometimes when I am downloading new media for the Plex server the cache will get quite full before the mover runs that night. Attached is a copy of my diagnostics. I am unfortunately not technical enough to solve this issue on my own. Any help would be amazing. Thank you in advance! tower-diagnostics-20210617-1209.zip
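The incremental-balance procedure from the linked threads can be sketched as follows. This is the same idea the post describes (start low, step up), not a guaranteed fix: each pass only rewrites the emptiest data chunks, so it needs very little free space to run, and it hands fully-freed chunk space back to btrfs so the filesystem stops hitting spurious out-of-space / read-only states:

```shell
# Run against the mounted cache pool; step the threshold up gradually
btrfs balance start -dusage=25 /mnt/cache
btrfs balance start -dusage=50 /mnt/cache
btrfs balance start -dusage=75 /mnt/cache

# Check the result: "Device unallocated" close to zero is the condition
# that triggers this class of failure on btrfs
btrfs filesystem usage /mnt/cache
```

If the drive flips read-only again mid-balance, that usually points to an actual filesystem error rather than exhausted chunk space, and the diagnostics/syslog are the place to look next.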
  8. Greetings. I have two custom Docker networks: one is the normal br0 network, and the other is one set up for reverse proxying, called proxynet, following SpaceInvaderOne's videos about reverse proxies. My Home Assistant docker runs on the reverse proxy network so that I can access my dashboard from outside my home network. But I have my Ubiquiti UniFi docker on my br0 network, because I want it to have its own IP, I don't want it exposed to the web, and that way it can operate correctly by finding my APs. When I went to integrate Home Assistant and the UniFi controller, I couldn't, because the Home Assistant docker cannot communicate with the UniFi controller docker. Can someone help me understand the problem in greater detail and suggest some possible solutions? Networking is not my strong suit, but I'm learning. Thanks in advance!
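One possible approach for the situation above, offered as a sketch rather than a confirmed fix: a Docker container can be attached to more than one network at once, so Home Assistant can stay on proxynet and additionally join br0 to reach the UniFi controller. Container names and the IP below are placeholders; substitute your own from `docker ps` and your LAN addressing:

```shell
# Attach the (hypothetical) HomeAssistant container to br0 as a second
# network, optionally pinning an unused LAN address
docker network connect --ip 192.168.1.25 br0 HomeAssistant

# Confirm both containers now appear on br0
docker network inspect br0
```

Note that the connection does not survive re-creating the container (e.g. after an update), so on Unraid people typically reapply it with a user script or post-argument.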
  9. I was using Steam credentials. I just did a fresh install of the docker/steam-cmd/ark-se. Here is my log:
     ---Checking if UID: 99 matches user---
     usermod: no changes
     ---Checking if GID: 100 matches user---
     usermod: no changes
     ---Setting umask to 000---
     ---Checking for optional scripts---
     ---No optional script found, continuing---
     ---Starting...---
     ---Update SteamCMD---
     Redirecting stderr to '/serverdata/Steam/logs/stderr.txt'
     [  0%] Checking for available updates...
     [----] Verifying installation...
     Steam Console Client (c) Valve Corporation -- type 'quit' to exit --
     Loading Steam API...Warning: failed to init SDL thread priority manager: SDL not found
     OK.
     Connecting anonymously to Steam Public...Loaded client id: 7190745870286915205
     Listening for IPv4 broadcast on: 27036
     Logged in OK
     Waiting for user info...OK
     ---Update Server---
     Redirecting stderr to '/serverdata/Steam/logs/stderr.txt'
     [  0%] Checking for available updates...
     [----] Verifying installation...
     Steam Console Client (c) Valve Corporation -- type 'quit' to exit --
     Loading Steam API...Warning: failed to init SDL thread priority manager: SDL not found
     OK.
     Connecting anonymously to Steam Public...Loaded client id: 7190745870286915205
     Listening for IPv4 broadcast on: 27036
     Logged in OK
     Waiting for user info...OK
     Success! App '376030' already up to date.
     ---Prepare Server---
     ---Server ready---
     ---Start Server---
     [S_API FAIL] SteamAPI_Init() failed; SteamAPI_IsSteamRunning() failed.
     Setting breakpad minidump AppID = 346110
     I am changing the ARK game server settings to match my desired server: things like XP multipliers and tame timers, things I should be able to change (https://ark.gamepedia.com/Server_Configuration). Things changed in the container config: network type is br0, console shell is bash, and the SteamCMD and ARK file locations are changed to an ark share stored on the cache. That is all I have changed; I didn't enter anything into the fields that were blank.
  10. I've been fighting to get an ARK server up and running for hours now. When I do a fresh install it shows up in the Steam server list, but as soon as I change O:\ark-se\ShooterGame\Config\defaultgame.ini or defaultgameusersettings.ini it stops showing up and never works again. Here is the latest log:
     ---Checking if UID: 99 matches user---
     ---Checking if GID: 100 matches user---
     ---Setting umask to 000---
     ---Checking for optional scripts---
     ---No optional script found, continuing---
     ---Starting...---
     ---Update SteamCMD---
     Redirecting stderr to '/serverdata/Steam/logs/stderr.txt'
     [  0%] Checking for available updates...
     [----] Verifying installation...
     Steam Console Client (c) Valve Corporation -- type 'quit' to exit --
     Loading Steam API...Warning: failed to init SDL thread priority manager: SDL not found
     OK.
     Logging in user 'XXXXXXXXXXXX' to Steam Public ...
     Generated client id: XXXXXXXXX
     Listening for IPv4 broadcast on: 27036
     Listening for connections on: 0.0.0.0:27036
     Received broadcast message from client XXXXXXXXXXXXX (DESKTOP-IAK4KNQ): xx.x.x.xx:27036
     Logged in OK
     Waiting for user info...OK
     ---Update Server---
     ---Validating installation---
     Redirecting stderr to '/serverdata/Steam/logs/stderr.txt'
     [  0%] Checking for available updates...
     [----] Verifying installation...
     Steam Console Client (c) Valve Corporation -- type 'quit' to exit --
     Loading Steam API...Warning: failed to init SDL thread priority manager: SDL not found
     OK.
     Logging in user 'XXXXXXXXXXXXXXXXXXXX' to Steam Public ...
     Loaded client id: XXXXXXXXXXXXXXXXXXX
     Listening for IPv4 broadcast on: 27036
     Listening for connections on: 0.0.0.0:27036
     Logged in OK
     Waiting for user info...OK
     Update state (0x0) : Timed out waiting for update to start, bailing.
     Error! App '376030' state is 0x204 after update job.
     ---Prepare Server---
     ---Server ready---
     ---Start Server---
     [S_API FAIL] SteamAPI_Init() failed; SteamAPI_IsSteamRunning() failed.
  11. I am hosting a couple of game servers, have a Pi-hole, and am setting up Plex and Home Assistant on my Unraid machine. Right now just the Pi-hole, Minecraft server, and ARK server are running. During setup and testing I had the two game servers' network type set to host and the Pi-hole set to br0, and everything was working fine; I was able to access all of them. Just now I changed the two servers to br0 as well, and issues arose. The Pi-hole is at x.x.x.2, Minecraft at x.x.x.20, and ARK at x.x.x.21. I am still able to access the Pi-hole web interface and ping x.x.x.20 and .21, but I can't access the Minecraft web server anymore and cannot connect to either server via the games. I have a Dell R620xl with 4 Ethernet ports, two of which are connected to my network. Is my understanding of how br0 is supposed to work wrong, or is there an issue? I'm pretty new to networking, but this seemed like it was supposed to work. Let me know if there is any additional information you may need. Thanks for the help!!!
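One thing worth knowing when debugging a setup like this: with br0 (a macvlan network), the Unraid host itself cannot reach its own br0 containers by design, so connectivity tests have to be run from another machine on the LAN. A generic sketch of such a check, with placeholder addresses and the games' default ports (TCP for Minecraft, UDP for ARK), assuming these are the ports the containers actually expose:

```shell
# Run from a different LAN machine, NOT from the Unraid console
nc -zv 192.168.1.20 25565    # Minecraft default TCP port
nc -zuv 192.168.1.21 7777    # ARK default UDP game port
```

Ping succeeding while the service port fails usually means the container is reachable but the server inside it is bound to a different port, or the port mappings from the old "host" configuration no longer line up now that the container has its own IP.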
  12. How do I view the log files so I know what the docker is doing?
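For the question above, the standard Docker CLI covers it from the Unraid console; the container name below is a placeholder (get the real one from `docker ps`):

```shell
# Last 100 lines of a container's output
docker logs --tail 100 my-container

# Follow the log live (Ctrl+C to stop)
docker logs -f my-container
```

The same output is also available in the Unraid webUI by clicking a container's icon on the Docker tab and choosing the log option.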