JonathanM

Moderators
  • Posts: 16,723
  • Joined
  • Last visited
  • Days Won: 65

Everything posted by JonathanM

  1. Likely a read failure that was successfully rewritten when it was detected. Diagnostics should show what happened. If you reboot, the error count will reset and you will lose any evidence of what happened, so download diagnostics before rebooting.
  2. Until you change the hardware path and figure out which part is responsible for the drive dropping, it's not productive to keep doing the same thing over and over.
  3. The issue here is that Unraid runs in RAM, so "normal" storage locations that work by default for 99.9% of the Linux world will either be in RAM or otherwise not valid. You must manually assign storage to valid locations, which can vary based on how each specific Unraid setup has been customized.
  4. Probably not going to happen, since they provide the https://(long random string) unraid domain access for people who don't have the knowledge to set up https properly. It's assumed that if you are overriding the default Limetech-provided https address, you know how to manually keep things working.
  5. Personally I use the SWAG container to host sites and reverse proxy what I need to expose to the internet. It handles all the renewals and security internally. I don't like the idea of allowing the built-in instance of nginx that hosts the webGUI to be on the front line exposed to the internet; if something crashes it, things get difficult. I know Limetech is moving forward with properly securing it, but I'm still of the opinion that the management access for Unraid should be behind another layer of security, preferably VPN.
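     As a sketch of what "manually assign storage locations" means in practice (the container name, image, and paths here are hypothetical examples): map anything that must survive a reboot to the array or a pool, typically under /mnt/user/appdata, instead of leaving it at a default path that only exists in RAM.

        # Hypothetical example: persist the container's /config on the array
        # rather than in RAM or inside the container image
        docker run -d --name myapp \
          -v /mnt/user/appdata/myapp:/config \
          -v /mnt/user/data:/data \
          myrepo/myapp:latest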
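     To sketch what that looks like on the docker side (the domain, paths, and parameter names are from my recollection of the linuxserver/swag documentation, so verify against the container's own template before relying on this): SWAG terminates HTTPS, handles certificate renewals, and reverse proxies to the internal services, so only this one container faces the internet.

        # Hedged sketch of running SWAG; substitute your own domain and paths,
        # and confirm parameter names against the linuxserver/swag docs
        docker run -d --name swag \
          --cap-add=NET_ADMIN \
          -e URL=example.com \
          -e VALIDATION=http \
          -p 80:80 -p 443:443 \
          -v /mnt/user/appdata/swag:/config \
          lscr.io/linuxserver/swag:latest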
  6. That's not a good answer. Please clarify, as there are consequences for using locations that aren't appropriate for logging. Runaway docker image size, running out of RAM and crashing, etc. Since you are the one recommending the changes, YOU need to know the details about what you are telling people to change, so you can let them know of the possible consequences and how to deal with them if things go wrong. Telling people to blindly follow your directions when you don't know the possible issues of what you are telling them to do is not good.
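     (As an aside, if the worry is runaway log growth inside docker.img, Docker's own json-file rotation options can cap it per container; the size and count below are arbitrary examples and the image name is a placeholder.)

        # Cap this container's internal log at 3 files of 10 MB each
        docker run -d --name myapp \
          --log-driver json-file \
          --log-opt max-size=10m \
          --log-opt max-file=3 \
          myrepo/myapp:latest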
  7. It may be a valid link, but it's not what was desired. Attach the diagnostics zip file to your next post in this thread. Clicking on random links to download an unknown file isn't ideal.
  8. This. Zero is the only acceptable number of errors.
  9. That will result in a fresh, empty filesystem that you will need to repopulate from your external backups. Is that what you intend to do?
  10. This should have been posted in the support thread for DelugeVPN, but the quick answer is that you need to add your VPN subnet to the list of allowed addresses. Click on the container icon in the Unraid GUI and select the support option; it will take you to the correct place.
  11. Do you have the Krusader-specific recycle bin enabled? BTW, this post really should be in the support thread specifically for the container; you can get there by clicking on the support link in the dropdown from the container's icon in the webGUI.
  12. You can see the current controllers being officially manufactured by (or for) LSI on their page: https://www.broadcom.com/products/storage/host-bus-adapters/tab-12gb1 If it's not there and it's being sold as new out of China, it's counterfeit. For older-generation controllers you need to look at eBay sellers that specialize in parting out old servers. There is no such thing as a cheap new official LSI controller. They sell to the server market, with prices to match.
  13. This addresses my only concern when I see someone asking about Unraid in a business setting. My normal answer is: Unraid is great for a business ONLY IF THE PERSON SETTING UP AND MANAGING THE SERVER IS AN OWNER OR SIMILAR PRINCIPAL IN THE BUSINESS. If you are an employee who could conceivably move on in the future while the business continues, don't try to sell the owner on Unraid. Either you will get stuck supporting the server after you leave, or the server will get neglected and lose data. Either way, it's not a good look for Unraid. If you are an IT company selling AND FULLY SUPPORTING Unraid for your customers, then great, just be sure to let the business principals know what is needed to maintain Unraid if your company is no longer capable or decides to move on. Unraid is great, and really not a problem to maintain if you have the skillset, but calling Geek Squad (or your local generic Windows PC shop) isn't going to work. Somebody familiar with Unraid must be involved in the ongoing care and feeding of the server, or it will become a liability.
  14. Are you clear on how to follow trurl's instructions? That is by far the fastest way to get it done, assuming you do exactly as he said and don't reinterpret his instructions as something else because you think it's better for some reason. I'll restate step by step (a sketch of the rsync copy and verify follows this list):
        1. Stop the array.
        2. Unassign the parity drive.
        3. Start the array (may not be necessary, can't remember).
        4. Stop the array.
        5. Assign the former parity drive as a data drive in a new slot.
        6. Start the array.
        7. Format the newly assigned (or copied) drive as XFS. Make sure it's the only drive showing as unmountable before you hit format, and be sure it says it's using XFS.
        8. Copy (NOT MOVE) all the data from the largest remaining ReiserFS drive to the newly formatted XFS drive.
        9. Verify the copy completed successfully, with a full compare of files if you wish.
        10. Stop the array.
        11. Select the drive that was the source of the last copy and change it to XFS.
        12. Are there any ReiserFS drives left? If yes, go to step 6. If no, continue.
        13. Assuming everything went well, you should be left with 12 data drives with XFS content, and the smallest drive should be fully copied and ready to be removed.
        14. Power off and physically swap the last copy source drive with a new 18TB drive; be sure to note the serial number.
        15. Go to Tools and set a new config, preserve all.
        16. Assign the newly inserted 18TB drive as parity, and rearrange the data drives however you want, just don't put a data drive in a parity slot by mistake. BE VERY CAREFUL ABOUT THAT!!!!
        17. Start the array and build parity.
        18. Do a correcting parity check to be sure everything is happy in the new config. Zero errors is the only acceptable answer.
      The copy and verify steps are up to you to accomplish however you like. I personally use rsync at the local console. If you insist on using a move instead of a copy: 1. there is no way to verify the data moved properly, and 2. it will take roughly twice as long, possibly 3x or longer depending on the state of your ReiserFS filesystem.
  15. Maybe I'm just not getting it, but first you say one thing and then you say another. Perhaps it would be clearer to outline your current state and then your desired end goal, like this:
        Current:
          18TB parity
          disk1: 4TB BTRFS, 3.9TB
          disk2: 4TB BTRFS, 3.5TB
        Desired:
          18TB parity
          disk1: 18TB XFS with the data from former disk1 and disk2
      Then maybe we can help formulate the fastest way to get things done.
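     For steps 8 and 9, here is a minimal sketch of the rsync copy and verify at the local console. The disk numbers are placeholders; substitute your actual source (ReiserFS) and destination (XFS) disks:

        # Step 8: copy (not move) everything from the ReiserFS disk to the
        # freshly formatted XFS disk (disk numbers are examples only)
        rsync -avX /mnt/disk3/ /mnt/disk4/

        # Step 9: verify with a checksum compare; --dry-run changes nothing,
        # and any file listed in the output differs between the two disks
        rsync -avXc --dry-run /mnt/disk3/ /mnt/disk4/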
  16. So where is this replacement log writing to? RAM? The docker image file? A mapped config folder?
  17. As itimpi said, why? I personally have 37 containers, all sharing the host's IP and answering on different ports. I find it much easier to manage that way, as I know which physical machine is responsible for the traffic and services. Besides, macvlan, the network driver that enables separate IP addresses for containers, is notoriously buggy for some people. It's not like you will run out of unique ports, with 60K+ available on each IP.
  18. What are you planning on doing with Unraid? The amount of storage you will need can vary wildly, so the only response to an open-ended question of "how big" is: buy the largest you can afford.
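     For example (the images and ports are arbitrary), two containers share the host's IP simply by publishing different host ports:

        # Both answer on the host's IP, just on different ports
        docker run -d --name web1 -p 8080:80 nginx
        docker run -d --name web2 -p 8081:80 nginx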
  19. And I can't seem to find where that was mentioned in this thread. What are you referring to?
  20. Don't bother with docker.img; it's easily recreated from scratch using the Previous Apps function of CA. Focus on the appdata config folders for your containers instead.
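     A minimal sketch of that kind of backup (the destination path is an example; stop the containers first so the config files aren't changing mid-copy):

        # Copy the appdata config folders to another share for safekeeping
        rsync -a /mnt/user/appdata/ /mnt/user/backups/appdata/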
  21. He told you what to do in the post you replied to. Change the HOST port to 8119 and leave the container port as 8118.
  22. User shares are automatically created for every root folder on every array disk or pool.
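     For example (the share name is hypothetical), creating a top-level folder directly on a disk makes it appear as a user share:

        # A root folder on disk1 shows up as the user share "Media",
        # merged with any same-named root folder on other disks or pools
        mkdir /mnt/disk1/Media
        ls /mnt/user/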
  23. If it's not on a physical drive, it's in RAM, which probably explains the out of memory errors.