SNDS

Members
  • Posts: 38
  • Joined
  • Last visited

  1. @itimpi Yes, but this was also feedback for the Unraid Connect keys page, which is broken.
  2. I have a dormant key that I want to move to a new USB stick, and I cannot download the key at all via Unraid Connect; it opens an about:blank tab and nothing happens. Can anyone help?
  3. Can anyone help me with this error? I just installed the Docker container, set up the URL, etc., and when I open the web UI and press "Go to account dashboard" I get the following. I have migrated my Franz account to Ferdium and am actively using the app on desktop, but I wanted to see about self-hosting.
  4. I'd love to hear if there has been an update on this. DNS rebinding makes the UniFi hardware virtually useless if I want to expose any services externally. Plex uses its own secure tunnel of some kind, but if I want to use anything for Plex requests, like Overseerr, without exposing my entire server to the internet via port forwarding, I need to be able to fix this rebinding problem (see the dnsmasq sketch after this list). @casperse, any thoughts on this?
  5. Perfect, thanks for the context and help!
  6. I guess that's my question: can I just use any container path in the mappings? Does it NEED to be /data?
  7. Is there any way to set up the Docker container to allow media coming from multiple different shares? I'm breaking up my media to keep it a bit more organized. Every time I try to use the same /data container path (which I assume is fixed, and that I can't create a new container path manually) for a new share, it fails. (See the volume-mapping sketch after this list.)
  8. Has anyone noticed whether Backblaze has an upper storage limit? I have ~50TB, with more on the way, that I'd like to be able to back up offsite.
  9. I'm not actually a customer; I'm looking into getting a backup service or solution. CrashPlan was an initial recommendation, but I'd like to avoid having the service cut off as my storage needs grow.
  10. I have almost 50TB, and that is growing with my personal work files (digital design/photography/videography assets and project files) plus DVD and Blu-ray backups; it's difficult to set up that much storage offsite affordably.
  11. I'd like to know this as well. My alternative, currently, is working with family members to create replicated servers offsite and using an rclone Docker service to automatically mirror my drives to the other server(s) (see the rclone sketch after this list).
  12. Semi-related: has anyone had trouble with CrashPlan Pro backing up large volumes of files? I have approximately 50TB that I want to back up.
  13. So... I have a feeling the kernel panic was due to the upgrade to 6.8.1; for my part, only 6.8 was stable. I say this because when I remade the USB stick with the old backup (6.7.2) and used the config files from the original stick, it booted up with the correct configuration and everything. I don't know if the guys at Unraid are aware of this issue, but I won't be migrating to 6.8 until this gets fixed. All in all, thanks @PeteAsking. The config migration solution worked; not sure why I didn't think of that. Much appreciated. I was freaking out a bit, I'll be honest.
  14. EDIT: @PeteAsking helped me out here; his solution of copying the original USB stick's config folder let me use the old 6.7.2 backup I had on the new stick. However, it would seem that the kernel panic I was having MAY be due to upgrading to 6.8.1. I don't know if the Unraid devs will see this, but please check on the following error in relation to 6.8.1: "RIP: 0010:kernfs_name_hash+0x9/0x6d".

      Hey folks, before you tell me I'm an idiot: I know that I screwed up, I just need some help. I was trying to migrate my Unraid install to another USB drive, as the one I used originally sticks out and could easily be damaged. In the midst of trying to do so, when I attempted to boot from the original USB drive prior to migration, I got the following kernel panic: "RIP: 0010:kernfs_name_hash+0x9/0x6d". From there I couldn't figure out how to get the server to boot, despite trying a safe-mode boot.

      I had an old drive backup that was accessible, so I decided to try to get another license to map to the new USB stick, because I couldn't migrate the previous key and use the old backup with the new key. This, plus using the old backup, worked; however, the array configuration is old. Some background: I recently updated my drive configuration with 2 new, larger 12TB parity drives and moved the previous 10TB parity drives into the main storage pool. I did this many days before upgrading the OS to 6.8.1, and it worked fine in that configuration. But now, when I boot into the server, it still thinks the old 10TB drives are the parity drives.

      Is there a way to override the array configuration? Or is there a way to fix the original USB stick so it doesn't kernel panic? Can someone help me? I'm a layman trying to get things back to the status quo here, to no avail.
  15. Hey folks, a couple of things. I did some upgrades, including new drives (they show up fine in my LSI controllers) and new RAM (capacities are different, but speeds are identical and register correctly in the BIOS); I'm currently running MemTest86.

      Initially I got the following error: RIP: 0010:panic+0x1e6/0x227. I could not find any results for this exact error on the forums or otherwise (Google, et al.), but similar problems recommended blowing out any RAM slots that were previously empty and reseating the RAM, which I have done. Additional recommendations included using a USB 2.0 stick instead of a 3.1 stick for reliability, so I started with a fresh Unraid install on a 2.0 stick (my motherboard only has 3.0 ports or a USB-C 3.1 port, so I can't use a 2.0 port) and now get THIS error: RIP: 0010:panic+0x1e3/0x224.

      I am now at a loss for troubleshooting: fresh install, RAM is visible to the system, reseated it and blew dust out of the sockets, and MemTest86 is running to see if anything is up. Any advice here would help. I don't know how to get a log when the error happens at boot (see the netconsole sketch after this list); if anyone can guide me, that would be wonderful. I'll include the memtest results once it's complete.
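
On the DNS-rebinding question above: dnsmasq-based resolvers, which UniFi gateways use, can exempt individual domains from rebind protection. The sketch below is an assumption about where the config lands, since the include path varies by UniFi model; plex.direct is the domain Plex's remote-access tunnel resolves through.

    # /etc/dnsmasq.d/rebind.conf  (hypothetical location; varies by model)
    # Permit DNS answers in private IP ranges for plex.direct only,
    # keeping rebind protection enabled for every other domain.
    rebind-domain-ok=/plex.direct/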
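
On the multiple-shares mapping question above: container paths are arbitrary as far as Docker is concerned, so /data is a template convention rather than a requirement. Instead of reusing /data itself for every share, a common pattern is to mount each share as its own subfolder under a single /data tree. A minimal docker run sketch, with hypothetical share and image names:

    # Each Unraid share becomes its own subfolder under the one /data
    # container path, so the app sees a single unified media tree.
    docker run -d \
      --name media-app \
      -v /mnt/user/movies:/data/movies \
      -v /mnt/user/tv:/data/tv \
      -v /mnt/user/music:/data/music \
      some/image:latest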
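
On the offsite-mirroring approach above, a minimal rclone sketch. It assumes a remote named offsite has already been created with rclone config and that the share to mirror lives at /mnt/user/media; both names are placeholders.

    # Preview first: sync makes the destination match the source,
    # deleting remote files that no longer exist locally.
    rclone sync /mnt/user/media offsite:media --dry-run
    # Then run it for real, with progress output.
    rclone sync /mnt/user/media offsite:media --progress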
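
On capturing a log when the panic happens at boot (last item above): the Linux kernel's netconsole facility can mirror console output, including panic traces, over UDP to another machine before the system dies, which helps when nothing gets written locally. A sketch under stated assumptions: the server's address is 192.168.1.5 and a receiver listens at 192.168.1.10 (both hypothetical).

    # On the flash drive, add the netconsole= parameter to the existing
    # kernel "append" line in syslinux/syslinux.cfg:
    #   netconsole=6666@192.168.1.5/,6666@192.168.1.10/
    # On the receiving machine, listen for the UDP stream
    # (flag syntax varies between netcat variants):
    nc -u -l 6666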