Everything posted by JonathanM

  1. Try updating to Unraid 6.7.3-rc4 using the Next branch in the OS update under Tools. Also, there is no need to link to outside sites; just attach any files directly to your posts in the future.
  2. It could, if you copy directly to /mnt/cache using the console or a container. If you are doing it correctly and copy to the user share instead, the file will show as already existing and ask you if you wish to overwrite it. The correct way to use a cache:yes share is to write to /mnt/user/share if you are using the console or a container. That way you see everything that is there already, and anything you add will automatically go to the cache first and be moved later. Using the disks directly, be it disk1, disk2, cache, etc., is not a good idea for someone not familiar with how unraid works, for many reasons.
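     As a rough example from the console (the share and file names here are placeholders):
     cp /path/to/movie.mkv /mnt/user/Media/
     writes through the user share, so you see anything already there and the new file lands on the cache first to be moved later, whereas
     cp /path/to/movie.mkv /mnt/cache/Media/
     skips that existing-file check.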
  3. Since it's new, can you return the Marvell-based controller? Switching to an LSI chipset controller would likely solve most of your issues.
  4. If by that you mean add a cache disk and use it for the config folders, then yes. Without a cache disk, I don't think there is anything you can do right now except wait for a new release or downgrade to 6.6.7.
  5. Are you ok with erasing all the data that was originally on disk7? That's what format will do.
  6. Your picture doesn't show the top of the Finder window, which is why the question was asked.
  7. If you connect to a public share, Windows offers up credentials and they are accepted, because the share is public. Then when you try to connect to a private share, those credentials aren't valid, so it pops up a dialog. That dialog is a trick. Windows will NOT allow 2 different sets of credentials for the same server, so even though it asks for new credentials, the existing login to the public share is still valid, and the new credentials are just discarded by Windows. You can either purge the existing credentials and logins, then force the first connection to be a private share, or you can use the IP address instead of the server name for the private share and Windows will happily accept new credentials because it's stupid and doesn't know you are actually accessing the same server.
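     A rough sketch of both approaches from a Windows command prompt (the server name, share name, user name, and IP address here are placeholders):
     net use * /delete
     net use \\TOWER\private /user:youruser
     or, connecting by IP so Windows treats it as a different server:
     net use \\192.168.1.100\private /user:youruser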
  8. Has already been addressed. Did you read the thread? This plugin fixes it. https://forums.unraid.net/topic/51959-plugin-ca-application-auto-update/
  9. Well, that would explain why it says it's too small. Delete and recreate with a more reasonable size.
  10. Wherever the host part of the path is set to. The container side lists what the container sees, the host side shows where it is in unraid.
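      For illustration only (the paths here are made up), a template entry with Container Path /data and Host Path /mnt/user/downloads behaves like the docker flag -v /mnt/user/downloads:/data, so a file the app writes to /data/movie.mkv ends up at /mnt/user/downloads/movie.mkv on the server.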
  11. At the unraid terminal, type these commands and post the results.
      du -h --apparent-size /mnt/user/domains/_shared/steam.img
      du -h /mnt/user/domains/_shared/steam.img
  12. See if the offending folders are still there following a reboot of the server.
  13. Show a screenshot of the VM settings page where you have the new vdisk defined.
  14. Here's my take on the situation. The sql thing has been an issue for a LONG time, but only under some very hard to pin down circumstances. The typical fix was just to be sure the sql database file was on a direct disk mapping instead of the user share fuse system. It seems to me like the sql software is too sensitive to timing, gives up and corrupts the database when the transaction takes too long. Fast forward to the 6.7.x release, and it's not just the fuse system, it's the entire array that is having performance issues. Suddenly, what was a manageable issue with sql corruption becomes an issue for anything but a direct cache mapping. So, I suspect fixing this concurrent access issue will help with the sql issue for many people as well, but I think the sql thing will ultimately require changes that are out of unraid's direct control, possibly some major changes with the database engine. The sql thing has been an issue in the background for years.
  15. Did you try setting the drive types to what they explicitly are instead of auto? The fact that some are encrypted and some not may not be auto detected, and you may need to click on each drive slot and define the format type for each drive.
  16. Not in the GUI that I'm aware of. It's easy to script though. The way I handle it is to autostart the primary services VM, and run a script on array start that pings the primary services VM; when it's responsive to pings the script runs "virsh start vmname" for the subsequent VMs. You can add delays or dependencies to your heart's content in the script.
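      A minimal sketch of that kind of script (the IP address and VM names are placeholders):
      #!/bin/bash
      # wait until the primary services VM answers a ping
      until ping -c 1 -W 2 192.168.1.10 > /dev/null 2>&1; do
          sleep 10
      done
      sleep 30    # optional settle time before starting the dependent guests
      virsh start database-vm
      virsh start web-vm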
  17. So you wait 3 hours to snipe my post?🤣
  18. While this behaviour is a little unsettling, and possibly should be modified, I can see why it probably happened. Parity is built walking the disk sectors in order. Since the disk you pulled was only a 500GB, and you were building to 6TB of parity, the disk was already not being read because the parity build process was beyond the 500GB mark. Its contents (and all the other drives to that point) were already written to parity and being emulated by the rest of the drives. I suspect if you had read file contents or written to that drive, it would have immediately failed and warned you, but since all you did was browse the TOC that was most likely in RAM, no activity was actually asked of the disk, so unraid didn't know it was missing. Since one of the main selling points of unraid is the ability to keep some drives spun down if they are not needed, I think some effort has been put into not actively poking drives just to be sure they are there. Unraid only fails a drive when a write to it errors out, so it's quite conceivable that a failed drive could hang out in the array for some time without being noticed. Regularly scheduled parity checks (typically monthly) are a way to be sure little used drives are still capable of participating in a rebuild should they be called into action.
  19. Watch this and see if it helps clear up things. https://www.youtube.com/watch?v=ij8AOEF1pTU
  20. A GUI function for this would be great, I would definitely use it, specifically to help with start and stop routines. I have certain groups of containers that are dependent on others. However, for right now, it's easy to set groups up manually using scripts, which can be handily dealt with in the user scripts plugin. Here's a ferinstance. You could save this as a script to start a media gather operation:
      #!/bin/bash
      docker start binhex-nzbget
      docker start binhex-delugevpn
      sleep 30
      docker start binhex-sonarr
      docker start binhex-radarr
      Save another script substituting docker stop <container> to shut them all down again with one click. If you really want to get fancy, you can link a script to an icon in your OS by using plink. Here's a Windows command to launch a script saved in the user scripts plugin on your unraid server. Assumes you have PuTTY installed.
      c:\<path to putty>\plink -pw <password> root@<unraid ip> /boot/config/plugins/user.scripts/scripts/<nameofscript>/script
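      A rough sketch of that companion stop script (assuming the same container names as the start script above, stopped in reverse order so the download clients go down last):
      #!/bin/bash
      docker stop binhex-radarr
      docker stop binhex-sonarr
      docker stop binhex-delugevpn
      docker stop binhex-nzbget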
  21. Wow. Just curious, what was your motivation for posting this question? 1. Trying to be funny? 2. Too lazy to read the FAQ? 3. Honestly didn't understand what you read in the FAQ? If the answer is 3, I am truly sorry for my attitude here, and apologize profusely. What part of the FAQ explanation needs more work?
  22. It's not just a matter of matching the board; many models also require swapping a chip from the old board. https://www.donordrives.com/pcb-replacement-guide