Frank1940 last won the day on September 3

  1. @JQNE, when you put the flash drive into the other computer to do the update, make a complete copy of the contents of that drive before you do anything else. You will need files and folders from the config folder to preserve the present configuration of your server. Plus, a backup is never needed when you have one!!! In the past, you could do an upgrade just by copying the bz* files from the root of the release zip over the ones in the root of the flash drive. I know this step is still required, but I am not sure it is all that is required today.
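The backup step above can be sketched as a small shell function. The function name and paths here are my own illustration, not an official Unraid procedure; on the server itself the flash drive mounts at /boot, but any mount point will do when the drive is in another computer.

```shell
# backup_flash SRC DST -- copy the entire contents of the flash drive
# (SRC) into a backup folder (DST), preserving attributes.  The config
# folder inside the copy is what you will need to restore your
# server's configuration.
backup_flash() {
    local src="$1" dst="$2"
    mkdir -p "$dst"
    cp -a "$src"/. "$dst"/
}
```

For example, on the server itself something like `backup_flash /boot "$HOME/flash-backup-$(date +%Y%m%d)"` would capture everything, config folder included.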
  2. Did you set up Notifications? Is that working? @trurl mentioned that you are on 6.8.3. Does this mean that you have been hesitant about doing any upgrades? The reason I am asking is that the addresses are hardcoded into each app, and somewhere in the back of my now-hazy mind is the thought that those addresses were changed a while back...
  3. I almost hate to jump in here, but here goes. I am going to give a set of instructions to perform, so that I (and anyone else who is looking at this) can be positive about what is being done and what the results are. Open the GUI terminal window (the >_ icon on the toolbar at the right side of the GUI). Now type the following command: ping A <CTRL-C> will stop this process. If you get an error message, please get a screenshot of it and post that in a new posting. The second thing to do, IF you get a failure, is to type this command into that Terminal: ping Again, capture a screenshot of that if it fails.
  4. I hope this will be a selectable setting. I check my cache drive a couple of times a week to make sure that everything got moved. If something didn't get moved, I know (most likely) that there is a duplicate file; I can then investigate and apply the remedy. (The couple of times it has happened, it has been simple to run down the file chain and find the cause.)
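For what it is worth, that weekly check can be scripted. This is just a rough sketch of the idea, assuming Unraid's usual /mnt/cache and /mnt/diskN mount points; the function name is my own.

```shell
# find_dupes CACHE DISK -- print every file that exists under both
# trees.  A file still sitting on the cache that also exists on the
# data disk is the classic duplicate that Mover will not overwrite.
find_dupes() {
    local cache="$1" disk="$2" f
    ( cd "$cache" && find . -type f ) | while IFS= read -r f; do
        if [ -e "$disk/$f" ]; then
            echo "duplicate: ${f#./}"
        fi
    done
}
```

Something like `find_dupes /mnt/cache /mnt/disk1` (run once per data disk) would list the candidates to investigate.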
  5. From the Unraid terminal (icon on the toolbar), type the following: ping Use a <CTRL-C> to terminate the process. IF that works, type the following: ping If the first command works, you have Internet access. If the second command doesn't work, you do not have a DNS server that is accessible from the Unraid server.
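That two-step test can be wrapped up so the diagnosis falls out automatically. A sketch, assuming nothing beyond a standard ping; the targets in the example comment are my own illustration -- any reliable public IP address and hostname will do.

```shell
# net_check IP HOSTNAME -- step 1 pings a raw IP address (success
# means basic Internet access); step 2 pings a hostname (failure here,
# after step 1 passed, points at DNS rather than connectivity).
# PING may be overridden, e.g. for testing; the default sends 3 probes.
PING="${PING:-ping -c 3}"

net_check() {
    local ip="$1" host="$2"
    if ! $PING "$ip" >/dev/null 2>&1; then
        echo "FAIL: no Internet access at all"
        return 1
    fi
    if ! $PING "$host" >/dev/null 2>&1; then
        echo "FAIL: Internet OK, but DNS is not resolving"
        return 2
    fi
    echo "OK: Internet access and DNS both working"
}

# Example (illustrative targets): net_check 8.8.8.8 google.com
```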
  6. As I recall, it is recommended that the appdata share be restricted to the cache drive. (Apparently, the system is more responsive when things are set up that way.) I believe that if the share is restricted to the cache drive, Mover will actually move everything back to that drive. (Turn on the 'Help' system to see what the choices are; I think the choice you want is 'Prefer' or 'Only'.) This will prevent the problem (or at least a portion of it) that you have run into. If you only have a single cache drive, the appdata share will not be protected, but there is a plugin called CA Appdata Backup/Restore which will back this share up to the array. EDIT: you do have to be careful to map into this folder only things that are actually used as 'settings' for the Dockers you are using. If a Docker creates actual data files, map those data files to the proper share where they are used.
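Once the share is set to stay on the cache and Mover has run, you can confirm that nothing is left stranded on the array. A minimal sketch -- the helper name is mine, and the /mnt/disk* layout in the example comment is the usual Unraid one.

```shell
# list_existing DIR... -- print each argument that is an existing
# directory; succeed only if at least one matched.  Pointed at a glob
# like /mnt/disk*/appdata, it shows which array disks still hold
# stranded appdata after Mover has run.
list_existing() {
    local d found=1
    for d in "$@"; do
        if [ -d "$d" ]; then
            echo "$d"
            found=0
        fi
    done
    return "$found"
}

# Example (illustrative): list_existing /mnt/disk*/appdata
```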
  7. What you now have to do is try to figure out what the circumstances were regarding each file in question. I would assume that a modified file being rewritten to the array would force a write directly to the array, which would overwrite the existing file. Perhaps (and the following is all conjecture on my part) the file was open by two processes, and when the write to the array failed, it wrote the file to the cache drive-- checking the dates and times might provide some clues here. Note of disclosure: I use a 'system' to prevent ransomware/malware from having direct write access to my array. That system occasionally results in the above condition--- usually a bit of stupidity on my part. You can read about it here if you are interested:
  8. One possibility that I know of is that Mover will refuse to 'move' a file to the array if the file already exists on the array.
  9. This discrepancy is usually caused by the overhead of the file system...
  10. @RAP2 (or someone else) asked how to find the storage space used on each hard disk, and the suggestion that came back was to click "Compute" in the Share tab. So, being adventurous (and curious), I did so on a couple of shares. The first one was the 'Media' share on the Test Bed server. That share is very small, with probably no more than thirty or forty files in it. It calculated within a second after the disks spun up. Thinking, "Gee, that was fun", I decided to try the Backup share. Now, that share contains (among other things) sixty-plus months of client-computer backups with some 9000 files in each backup! It did appear to lock up my GUI--- I finally just walked away and let it run. A couple of hours later, I came back and it had finished, showing the GB used for the share and the GB used on each of the data drives. What I suspect is happening is that this process is not running in the background as far as the GUI is concerned. I can easily see how it would seem to work perfectly in the development environment and yet bog down the whole server when used in the real world on a working server. (I know that I am never going to click on it again!!! 😉 😈 )
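For anyone who needs those numbers without gambling on the GUI, the same total can be had from the command line. A sketch, assuming the usual /mnt/user/<share> paths; the function name is my own.

```shell
# share_size DIR -- report the total disk usage of a directory in
# human-readable form, using du instead of the GUI's Compute button.
share_size() {
    du -sh "$1" | cut -f1
}

# Example (illustrative path): share_size /mnt/user/Backup
```

Run it inside a screen or tmux session for a big share and it will grind away without tying up the web GUI.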
  11. Think carefully before moving up to only a 4TB drive. All of your drives are pretty full, and if you go with a 4TB parity drive, you will be paying a lot of money for each additional TB that you add in the future. Seriously consider moving to at least a 6TB drive. The sweet spot is probably north of 8TB at this time. (I haven't really been tracking where it currently is.) The only disadvantage is the increase in parity check time, but there is now a plugin to pause these checks when you expect the array to be in use.
  12. While @Batter Pudding and I were discussing securing Unraid and Windows file sharing, I brought up the topic of secure Peer-to-Peer Windows 10 file sharing. He provided me with a very basic guide on how to do this, and I took the time to assemble his information into a PDF file. I then had a debate with myself about what to do with this information. One line of thought said to post it in a new thread in this subforum. Another line said that it should not be posted at all, for several reasons: Windows 10 Peer-to-Peer secure sharing is not really an Unraid issue. Windows 10 Home does not really support secure Peer-to-Peer file sharing; Windows 10 Pro is the entry-level version with enough networking tools to allow a clean user interface and secured Peer-to-Peer file sharing. And Windows 10 Peer-to-Peer secure file sharing does not seem to be exactly the environment in which Microsoft intends SMB to be deployed in 2021; it is a very small part of a much larger networking universe that usually requires a trained/schooled professional to set up and maintain, and it can become very easy to find yourself in over your head as you try to add just one more desired feature. (It would seem that Microsoft intends Peer-to-Peer to be used in a home environment in an unsecured manner -- that is basically all Windows 10 Home supports!) However, there are Unraid users running Windows 10 Pro who are interested in a secure environment in their home networking setup, and this information will help you achieve that goal. Plus, there are probably a few users with very small businesses, having only a few networked computers, who would like to secure them without the expense of an IT consultant. A person with above-average technical skills will probably be able to get a workable setup deployed. Should you find yourself getting in over your head, you will probably have acquired enough knowledge to conduct meaningful interviews to select an IT professional you can work with.
As you can tell, after several months of procrastination, I decided to post the PDF. Make use of it as you will. One final thought: we created a user, Charlie, in the PDF. Remember that Charlie is just the name for a set of rules that any user logged in to the server computer as Charlie will follow! In a typical home environment, I might set up two users-- Parent and Kid. (Obviously, the Parent user has different privileges than the Kid user!) Each client computer (or user profile) would log onto the Peer-to-Peer server(s) as one of those two users. Furthermore, there can be more than one client computer logged in using either of these profiles, since the actual user name on the server is client-computer-name\user-name! The same line of thought could be applied to a small business, where two of the user profiles might be Owner and Cashier. Windows 10 Peer–to–Peer File Sharing Guide.pdf
  13. Sometimes replacing a drive results in the BIOS re-configuring itself to boot from the new drive. Another (remote) possibility is that the 10TB drive has failed in a way that prevents the MB from POSTing.
  14. As I read over what you intend to do, I can see only one flaw in your thinking: if you have a parity disk, it will be a real, real choke-point. IF you want to do this, you will have to unassign the parity disk until you are finished, then reassign the parity disk and rebuild parity. It sounds like you will have a backup of the data on the other server, so this should not be a problem. Writing directly to a disk share is not a "NO, NO". What is a problem is doing file operations between disk shares and user (array) shares. That type of operation can eventually cause a loss of data!!! (Bit of clarification: file operations from disk share to disk share are OK, and file operations between user shares on the array are not a problem either. It is mixing the two that causes trouble.)