Leaderboard

Popular Content

Showing content with the highest reputation on 05/01/20 in all areas

  1. This is a plugin that creates a per-share .Recycle.Bin folder on each SMB share for files deleted on that share. Built into Samba is a module called "vfs recycle" that handles the deletion of files. This plugin manages the vfs recycle settings through smb-shares.conf and starts and stops vfs recycle; essentially, Samba is restarted with the smb-shares.conf file configured for vfs recycle. The folder name .Recycle.Bin was chosen so that the recycle bin is a hidden folder, which is the normal way a recycle bin works. When a file is deleted, vfs recycle moves the file pointers (directory information) to the .Recycle.Bin folder on the same share where the file is located. Each share has its own .Recycle.Bin folder containing the files deleted from that share; the folder is created the first time a file on that share is deleted.

This organization of deleted files can be modified in the WebUI: you can set parameters to organize deleted files by user or machine, and then the file structure after that. Deleted files can be recovered as necessary by browsing to //Tower/share/.Recycle.Bin. You can also optionally set up an empty trash event (Hourly, Daily, or Weekly) that deletes files older than a configurable number of days; deleted files are dated by when they were deleted, not by their original date. You can also click the "Remove Aged Files" button in the WebUI to remove deleted files based on the aged days setting. To make the .Recycle.Bin folders visible in the shares, set 'Hide "dot" files' to No in the SMB settings; you can then browse the .Recycle.Bin folders. If the folder is not visible, you can still browse to it with //Tower/share/.Recycle.Bin. I recommend setting up the weekly cron and setting the aged days to something that makes sense for you (default is 7 days); that will keep the .Recycle.Bin folders from getting out of hand.

Note: Shares will not show up when browsing the Recycle Bin from the WebUI unless there is a .Recycle.Bin folder on that share. Click the 'Help' button at the upper right of the menu bar for explanations of the settings.

The preferred way to install the plugin is through Community Applications, or you can paste this link into the "Install Plugin" line: https://github.com/dlandon/recycle.bin/raw/master/recycle.bin.plg

Note: Be sure to remove any older version of the plugin (prior to V6.3) before installing this new version.

Note: If you remove the plugin, the .Recycle.Bin share folders will not be emptied. If you want to permanently remove the plugin, empty the trash before removing it.

Excluding Shares: Enter the share names to exclude from the recycle bin. These names are treated as wildcards: any share containing the excluded text will be excluded, unless the name is single quoted, in which case the share name must match exactly. Example: with the shares Joe Files, Sam Files, and My Files, setting the exclude to Files (without quotes) will exclude all three shares; to exclude only My Files, set the exclude to My Files (without quotes). Note: You should exclude the appdata and system shares, because there is no reason to have a recycle bin on those shares, and heavy Samba delete activity on them may create more activity than the recycle bin can handle.

Excluding Directories: You can exclude directories much like shares. To exclude a directory from all shares (including UD mountpoints): directory. To exclude a directory from a particular share: share/directory (the full /mnt/user/ path is not necessary). To exclude a directory from a UD share: mountpoint/directory (the full /mnt/disks/ path is not necessary).
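The share-exclusion rule can be sketched in Python. This is only a hypothetical illustration of the matching semantics described above (substring match unless single quoted, exact match if quoted); the function name is made up and this is not the plugin's actual code.

```python
def is_excluded(share, excludes):
    """Sketch of the exclusion rule: a single-quoted name must match the
    share name exactly; an unquoted name excludes any share containing it."""
    for pattern in excludes:
        if pattern.startswith("'") and pattern.endswith("'"):
            if share == pattern[1:-1]:  # quoted: exact match required
                return True
        elif pattern in share:          # unquoted: wildcard (substring) match
            return True
    return False

shares = ["Joe Files", "Sam Files", "My Files"]
# Unquoted "Files" matches every share containing that text.
all_three = [s for s in shares if is_excluded(s, ["Files"])]
# Quoted 'My Files' matches only the share named exactly "My Files".
only_my = [s for s in shares if is_excluded(s, ["'My Files'"])]
```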
Credits: This plugin is based on the previous work of Influencer done back in the V5 days.
    1 point
  2. How to install the Jitsi stack and run it through a reverse proxy. This guide uses Docker Compose and Portainer.
    1 point
  3. Are you a professional translator in Arabic, French, Mandarin, or Spanish? If so, please contact us about an opportunity for paid translation work. Here are some of the requirements we are interested in: native-level ability translating text from English to Arabic, French, Mandarin, and/or Spanish; being an Unraid user; technical translation experience preferred. If interested, please reach out to me directly with the language(s) you know and how long you have been using Unraid, and more information will be provided. Gracias!  谢谢 Merci! شكرا Please post any questions here!
    1 point
  4. CPU temp is real. Tctl is the temp with a 27°C offset that is used to control the fan curve. Sent from my MI 8 using Tapatalk
    1 point
  5. The multifunction option is kinda a workaround in certain cases, like some AMD GPUs that need it. In most cases it isn't needed and most setups work without it. If you understand the basics and know how to fix small things, Unraid is pretty easy to get working, but every setup is different and kinda every user has its own small quirks.
    1 point
  6. 4kn will be fine as long as the LSI is in IT mode, and it is, but probably a good idea to update the firmware since it's on an ancient one.
    1 point
  7. That did it! Thank you very much for your help and quick response. I'll be sure to donate!
    1 point
  8. Also note that the order of the pack you have selected (once you fix the comma) means fuserealism is secondary to peegs, so if peegs has any textures they will overwrite fuserealism's. They apply bottom to top.
    1 point
  9. IT WORKS! Yeah, it all makes sense to me now, but I'm not used to working on passthrough like this. Used to the GUI from ESXi... that and I kinda trusted unRaid to just pick the right options after the devices were assigned to the VM. Lesson learned! I'll leave it as is for now but possibly switch back to the flash conf method later in case I forget what I did and it's suddenly "not working" again after a device change lol. Thanks for the assist!
    1 point
  10. I'm talking about the settings in letsencrypt and not port forwarding. I don't know how spaceinvader tells you to set it up, so you should go back and make sure you did it correctly.
    1 point
  11. Correction of terminology: "array" of spinning rust. "Pool" can refer to the cache pool, so using pool interchangeably can be confusing. The SSD does not need to be partitioned like that. Just put the SSD in the cache pool and put a vdisk for the Windows VM and the plex db (and docker image) on the cache pool. It essentially achieves the same thing (just set the vdisk to be half the size of the SSD if you want to use up to half the SSD for the Windows VM). The pro of using a vdisk is that it is thinly provisioned. You can mount the SSD as an unassigned device (there's a plugin for that), format it with 2 partitions (warning: command line), and set things up manually (warning: manually!) to achieve what you are after. However, the performance benefits you will get (or not get) are minute compared to the complexity.
    1 point
  12. Fan control with Unraid is typically done by the mobo BIOS. The plugin supposedly works, but only for some specific hardware. When you pick Auto, the BIOS picks a certain curve. 41C is hardly hot enough for the fan speed difference to be audible. To sort of "test" the fan, you have to put sufficient load on it. There is no downside to keeping the fan running at 100% except for the noise, so if the noise is acceptable to you, just keep it at 100%.
    1 point
  13. I'd like to thank the members who have reached out to me privately. I got a share going. I'm still struggling with snapshots/previous versions, but I also have a backup in order. Thanks! I know newbie issues usually don't get a lot of help publicly around here, so it's much appreciated to see them get answered, even behind the scenes. I think this and the Level 1 guides are great guides, but they're written by high-level people who forget there are trees in the specific forest to paint; I'm just good at finding those sorts of things. I'll try to put together a quick noob-to-newbie guide when I do my actual build.
    1 point
  14. Exactly the way to go, usually. If you define them as multifunction, they have to be on the same bus, and 05:00.1 isn't on the same bus as 05:00.0. Either put it also on bus 1 (bus='0x01') with function 1 (function='0x1'):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
  </source>
  <alias name='hostdev1'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
</hostdev>

Or remove multifunction completely and put the two devices on different buses:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
  </source>
  <alias name='hostdev1'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</hostdev>

EDIT: I should have noticed in your initial XML that the bus addressing is wrong: the devices are configured as multifunction but are not on the same bus. 😒
    1 point
  15. Thanks. That worked. My world now has Peegs !
    1 point
  16. @DaveDoesStuff Wait what? You can't access it over the network? Is one of the Intel Network Controllers your managing interface?
    1 point
  17. CA / FCP does this very extensively (with dynamix.docker.manager and dynamix.plugin.manager). The only thing you have to be aware of is that there is no guarantee the functions you want won't change from one unRaid version to the next, although this is very rare.
    1 point
  18. Forgot to mention: if you switch around physical devices, always make sure to adjust the entries in that file. Removing, adding, or switching a PCI device in your server might change the addressing of the devices.
    1 point
  19. @DaveDoesStuff It was only an idea to try. I had a GPU passthrough issue where a VM spitting out errors from the controller in the same group caused random freezes of the VM; binding it helped in my case. What you can try with the 2 ethernet controllers is not to use the syslinux config, which is the old way to stub devices. Try the following:
Remove the 2 NICs from the VM.
Remove the vfio-pci ids from the syslinux config.
Create a file in the config folder on the Unraid flash drive (touch /boot/config/vfio-pci.cfg) and add the following line to that file: BIND=05:00.0 05:00.1
Reboot Unraid.
Re-add the 2 controllers to the VM.
    1 point
  20. @DaveDoesStuff Did you try to also bind the "[1022:43c7] 02:04.0 PCI bridge" to vfio and pass it to the VM? Binding alone might already be enough. Usually you have to pass through all members of an IOMMU group to make it work.
    1 point
  21. login: admin, password: admin
    1 point
  22. Try plugging in the HDMI on the GPU to a monitor or something. It may need to have some type of output for it to work. Try a Dummy HDMI adapter.
    1 point
  23. Update: This update changes how folders are created. Before, there was a hidden docker for each folder. Also added an option to have a button for the dashboard expanded, as well as maybe fixing a bug with dashboard expanded. Expanding with the buttons can cause visual issues as in the posts above; will try to fix this but it might not be worth the hassle (sorry). @GooseGoose hope this fixes your issue with the lines, if not will try again
    1 point
  24. FWIW the version I'm having issues with is 10.11 El Capitan. Presumably Time Machine functionality is meant to support more than one macOS release.
    1 point
  25. There has been a lot written about it on this forum, including a topic pinned near the top of this same subforum with a link to a somewhat long wiki and a link to the long thread that was very active when many people were first doing this. It depends on how comfortable you are working with things on the server at the disk level, as to how complicated you think this is. But long story short, when you change the filesystem of a disk, it gets formatted to the new filesystem, so if you want to keep any of that disk's data, you have to put it elsewhere.

From your diagnostics it looks like most of your disks are pretty full. If you can, starting with adding a new disk assigned to a new data slot and formatted XFS would be simplest; then you could move all files from another disk to that new XFS disk so you could reformat that other disk as XFS. I say simplest because if you start with an empty disk then you can be sure you have space to move all of a disk. Then that reformatted disk can be the target for moving the files off another disk; repeat as necessary, doing the smaller disks last since they don't have space for the contents of the larger disks, obviously.

Looks like you only have 5.5TB free, so that won't be enough capacity to just shuffle things around on your existing disks. You could move everything off the smaller disk2 to other disks, reformat it as XFS, then replace / rebuild it to a larger disk and get the free capacity needed to work with those larger disks. The unBALANCE plugin can help with shuffling things between disks. You don't have so many disks compared to some, but I would expect it to take a few days to get them all done. Do you have backups of anything important and irreplaceable?
    1 point
  26. When you update the container, you update everything except Nextcloud itself, meaning packages and dependencies. Update the container first, then Nextcloud.
    1 point
  27. Q1: As long as you can boot Unraid you will be able to install the pre-clear plugin so that you can test those drives. You can do it via USB, but that will almost certainly make it slower. Note that preclear is not a fast process, as it involves reading and writing every sector on the drive; with 10TB drives it is going to take days for each drive. There is then the question of whether the 'preclear' state will survive moving the drive from a USB enclosure to a SATA connection. That depends on whether the USB enclosure presents the drive to the system identically to the way a SATA connection does; in my experience most enclosures do, but not all. However, as your primary purpose at this point is to stress test the drives, that is not important. When you first set up the array, whether a drive is pre-cleared or not makes no difference. It is only later, if you try to add a new drive to an existing parity-protected array, that this distinction matters at all, and even then it still works but just takes longer if the preclear state is not recognised.

Q2: For your use case, using the NVMe drive as the cache seems a very sensible thing to do. Note that whether ordinary file writes go via the cache is configurable at the share level, so it is not uncommon for users to always write directly to the array, bypassing the cache for media files, and to only use the cache for application/VM purposes. During your initial data transfer you definitely do NOT want to go via the cache, as it would not speed up the process. Also, for the initial transfer, if you do not mind the data being unprotected, do the load without assigning the parity drive, as having a parity drive significantly slows down writing to the array due to the way the system maintains parity in real time. If you DO want parity protection from the outset, then enable the "turbo" write feature, which speeds up writing at the expense of keeping all drives spinning. Parity can be added at any point, so when to do it is up to you.
    1 point
  28. I would think this would be a good idea, as it protects you against cache failure.
    1 point
  29. 1 point
  30. I had the same problem. Remove the duplicate one in your subdomain conf file and the warning should go away.
    1 point
  31. Because it's first in the list. Drag the rows to rearrange the order.
    1 point
  32. A Titan might be one of the few cards that it actually matters on since it might actually be hungry enough to saturate the bus... Most of us can only dream of having that problem from afar... 😄
    1 point
  33. @mbc0, @CommandLionInterface, @CHBMB I requested, and was granted, a change to the docker container. The solution to all your write-on-samba problems is now here. First, make sure the container is up to date. Then, edit the docker container and go to "Add another Path, Port or Variable" The Config type is variable, the Key is "UMASK_SET", and the Value is "000" (all without quotes). Click on Add, then Apply. Afterwards open the WebUI for the syncthing docker container. Go to Actions > Advanced. On EACH of the Folders you have synced to unRAID, check the "ignorePerms" box, and click save. Now, when new files are synced over to unRAID, they can be written to over the network share. If existing files cannot be edited, use the "New Permissions" tool in unRAID or the "Docker Safe New Permissions Tool" plugin to fix the share that has the problem.
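For anyone wondering why UMASK_SET=000 makes the difference, here is a minimal Python sketch of how a umask affects the permissions of newly created files. It is an illustration only (the file name is arbitrary, and this runs on the host rather than inside the container): with umask 000, the full requested mode is kept, so new files come out group- and world-writable.

```python
import os
import stat
import tempfile

# With umask 000 (what UMASK_SET=000 gives the container process),
# a file created with requested mode 0o666 keeps all those bits.
# With the common default umask 022 it would end up 0o644 instead.
old_umask = os.umask(0o000)
path = os.path.join(tempfile.gettempdir(), "umask_demo.txt")
if os.path.exists(path):
    os.remove(path)  # ensure O_CREAT actually applies the requested mode
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
os.close(fd)
mode = stat.S_IMODE(os.stat(path).st_mode)  # permission bits only
os.umask(old_umask)  # restore the previous umask
os.remove(path)
```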
    1 point