Everything posted by trurl

  1. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  2. Compare the files at /mnt/cache/system and /mnt/disk1/system. It should be using those on cache. In any case, that isn't likely the reason for your problems, just something to fix to make the general docker setup work better. If docker is using files on the array, those disks will keep spinning, and containers will have their performance impacted by the slower array. Not sure about the downloads stalling. Maybe try deleting and recreating docker.img.
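The comparison in the post above can be done with plain `diff`. This is a hedged sketch, mocked on temp directories so it runs anywhere; on a live server you would compare the real paths `/mnt/cache/system` and `/mnt/disk1/system` directly, and the file names here are only illustrative.

```shell
# On a live server the real comparison would be:
#   diff -rq /mnt/cache/system /mnt/disk1/system
# Mocked here with a temp tree standing in for the two locations.
root=$(mktemp -d)
mkdir -p "$root/cache/system/docker" "$root/disk1/system/docker"
echo "live-image" > "$root/cache/system/docker/docker.img"
echo "stale-copy" > "$root/disk1/system/docker/docker.img"

# -r recurses, -q reports only which files differ or exist on one side;
# diff exits nonzero when differences are found, so capture rather than fail
result=$(diff -rq "$root/cache/system" "$root/disk1/system" || true)
echo "$result"
```

Any file reported as existing only under disk1, or differing from the cache copy, is a candidate for cleanup once dockers are stopped.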
  3. Did you ever reboot? Are user shares accessible now? I'm not sure writing files to /mnt/user will actually do anything that will survive reboot. Writing a folder to /mnt/user will result in creating a user share named for the folder and any contents would probably wind up on disk1 or whichever disk was in line for highwater. Not sure what would happen if you deleted a folder from /mnt/user or overwrote /mnt/user with nothing. Maybe it would just break user shares rather than going through all the disks and deleting everything. Not going to try it😉
  4. The mappings aren't the only thing you need to consider, though. It is the paths specified in the application, and whether or not they correspond to mapped storage. Common mistakes are specifying a path in the application that doesn't exactly match a container path in upper/lower case, and specifying a relative path in the application (relative to what?). I don't use tautulli, but from looking at it I assume it only writes to appdata. The main ways plex will write to a path other than appdata are transcoding and DVR, if you use either of those features. The new "skip intro" feature also seems to be a problem for some.
  5. Just thought I should explain the consequences of this since you didn't comment on it. The system share is where docker.img and libvirt.img normally live. If dockers or VMs are enabled, these files will be open, and if those files are on the array, array disks will be kept spinning, with the added cost of the decreased performance from parity updates.
  6. My sister and niece stream from my plex server using their Rokus from half-way across the country. They don't know anything at all about my files, my server, or my network.
  7. Most of us have our media on our Unraid server, and run a plex server docker on that same server. Then any plex client can stream that media without even having to know anything about the files on the Unraid server. Certainly see no reason to get sftp involved in this or even SMB.
  8. Those are connection issues. You can acknowledge them by clicking on the warning on the Dashboard and it won't warn again until they increase. The disk you are rebuilding looks fine and only had 11 CRC errors, so I wouldn't worry about it. No reason to rebuild unless it was disabled, but let it continue. Unlikely related to any problem, but your system share has files on the array instead of all on cache. Syslog resets on reboot, so those diagnostics don't tell us anything about what happened prior. Set up Syslog Server so syslog will be saved somewhere you can get it, and post it when you have to reboot again.
  9. After recreating docker.img, no container images will exist. It would be better to see the exact error you got rather than your interpretation of it. Going back to your original post: it is obviously lacking in details. My guess is you did something wrong reinstalling your dockers at that point, and reinstalling them with the Previous Apps feature just did it over again like you had previously, so also wrong. Probably best to start over and try to get a single docker set up correctly and go from there. Is plex working?
  10. Your general docker setup is almost perfect, except the system share has some files on the array. Do you actually have any VMs? Do you know how to use the built-in Midnight Commander file explorer (mc from the command line)?
  11. Parity is realtime, cache is not. In fact, there is no requirement for cache to get flushed to the array. Cache is just additional (and differently managed) storage. One of the features of Unraid is the Mover, which will move files from cache to array for those shares set to cache-yes, and it can even move files from array to cache for those shares set to cache-prefer. Or Mover will ignore files for cache-no or cache-only shares. Mover runs on schedule, default is daily in the middle of the night. It is typical to set things up so some things stay on faster cache, such as dockers and VMs. Files for a user share on cache are part of the user share just as files on the array are. Files typically will not exist on cache and array at the same time, though you could force this by working directly with the disks, in which case, files on cache have priority when accessing the user share. Here are some more details about the various cache settings for each user share:
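The cache-yes behavior described in the post above can be sketched as a file relocation that preserves relative paths, so the merged user-share view never changes. This is only a rough illustration of the idea, mocked on temp directories with a hypothetical share named media; on a real server the locations would be /mnt/cache/&lt;share&gt; and /mnt/diskN/&lt;share&gt;, and the actual Mover is more sophisticated (open-file checks, split levels, etc.).

```shell
# Rough sketch of Mover's effect for a cache-yes share: each file on the
# cache copy of the share is relocated to an array disk at the same
# relative path, so /mnt/user/<share> (the merged view) is unchanged.
root=$(mktemp -d)
mkdir -p "$root/cache/media/movies" "$root/disk1/media"
echo "file-a" > "$root/cache/media/movies/a.mkv"

# Move every file, recreating its parent directory on the array disk first
(cd "$root/cache/media" && find . -type f | while read -r f; do
    mkdir -p "$root/disk1/media/$(dirname "$f")"
    mv "$f" "$root/disk1/media/$f"
done)

ls "$root/disk1/media/movies"
```

After the move, the file exists only on the array disk, which is why files normally don't live on cache and array at the same time.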
  12. And no doubt was used in that video.
  13. Various btrfs raid configurations are available in the pool(s).
  14. Just trying to see if something about one of your plugins might be causing this.
  15. No, it is a unique identifier, often referred to as the GUID. It is built in to the USB drive and is unique to that specific USB drive. Or, at least, it is required to be unique in order to have an Unraid license associated with it. If you were able to get a license then it must be OK.
  16. I am running latest beta, so I have 2x500GB SSD as cache pool, and 256GB NVMe as "fast" pool.
  17. Due to parity updates, it will always be slower than single disk speed for writing to the array. Same as single disk speed for reading from the array. But, since each disk is an independent filesystem, each disk can be read independently on any linux. If you lose more disks than parity can recover, you haven't lost everything as you would with striped RAID. Also, since each disk is an independent filesystem, you can mix different sized disks in the array, easily add disks without rebuilding the array, and easily replace disks with larger disks. Unraid is not RAID, for good reasons. You trade speed for these other benefits. Also, as seen in that video, faster storage is available in the cache pool. And recent betas support multiple fast pools.
  18. Unraid does not run from a USB drive. The USB drive contains the archive of the OS. The archive is unpacked fresh at each boot into RAM, and the OS runs completely in RAM. Think of it as firmware, except easy to upgrade with no risk of bricking. In addition to the OS archive, the USB drive stores configuration settings so they can be reapplied at boot, but the USB drive isn't really used much and doesn't require speed or much capacity. There is no other way to install Unraid. The GUID of the USB drive is required for licensing.
  19. I just sort of skipped through that video without any audio just to find what I suspected at 5:24. All of the storage is in cache pool, which is btrfs raid, not the Unraid parity array. So he isn't doing
  20. Parity and cache are not required. There must be at least one data disk in the array in order to start the system. Since you don't seem to be that interested in the NAS functionality of Unraid, you might put that HDD as disk1 in the array, and the SSD as cache. It doesn't have to be used to cache anything; it is just a way to get Unraid to manage that disk for you. Then you would set up dockers/VMs to use that cache disk. The usual (cache-prefer) shares for that are appdata (docker working storage), domains (vdisks for VMs), and system (docker.img for docker executables and libvirt.img for VM setup). Dockers and VMs would also have access to that disk1 HDD.
  21. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  22. Might even be a good idea to shutdown until further advice arrives. I have some ideas but would rather wait on @JorgeB
  23. Have you tried after booting in SAFE mode?
  24. New Permissions should only be used on specific user shares, mostly because appdata has the permissions required by each container. There is a Docker Safe New Permissions tool which will do everything except appdata. I don't use this application. Presumably you could delete its appdata and really start from scratch, but maybe that's not what you want. Maybe someone else will have an idea.
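The "docker safe" idea in the post above amounts to a recursive permissions reset that prunes the appdata subtree. This is a hedged sketch only, mocked on a temp tree so it is safe to run; it is not the actual tool's implementation, and on a real server the root would be /mnt/user with ownership set to nobody:users as well.

```shell
# Sketch of a "docker safe" permissions reset: open up modes everywhere
# except under appdata, since each container manages its own perms there.
root=$(mktemp -d)
mkdir -p "$root/media" "$root/appdata/plex"
touch "$root/media/file.mkv" "$root/appdata/plex/db.sqlite"
chmod 600 "$root/media/file.mkv" "$root/appdata/plex/db.sqlite"

# -prune stops find from descending into appdata; everything else matches
find "$root" -path "$root/appdata" -prune -o -type f -exec chmod 666 {} +
find "$root" -path "$root/appdata" -prune -o -type d -exec chmod 777 {} +
```

After this runs, the media file is world-readable/writable while the file under appdata keeps its original restrictive mode.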
  25. Almost certainly flash disconnect. Be sure to boot from a USB2 port. And be sure to make a new post with your diagnostics so we will know to visit this thread again for the new information.