sdfyjert's Achievements





  1. Hi, I just upgraded my rig from an old A10 to a decent i7 on an Asus Prime with the Z390 chipset. Now that I have the firepower, I wanted to play a bit with VMs. I wanted to pass through the onboard ALC audio, but it's nowhere to be found in the audio devices, and also nowhere to be found in the IOMMU groups. My A10 board also had an ALC sound chip (a really old one) and it was always listed. The one shipped with the Asus Prime Z390 is the ALC S1220A 8-Channel, which should normally be supported out of the box by the Linux kernel. Any ideas / clues / hints on how to make Unraid aware of the audio device?
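Not an Unraid-specific answer, but a quick sanity check from the console: list audio-class PCI devices directly, independent of what the webGUI shows. If nothing comes back, a common cause (an assumption worth verifying, not a diagnosis) is that onboard HD Audio is disabled in the board's BIOS/UEFI.

```shell
# List audio-class PCI devices, independent of the Unraid device list.
lspci -nn | grep -i audio \
    || echo "no audio controller visible on the PCI bus"
```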
  2. I was hoping to use them to indicate purchase source and invoice number 😃
  3. version: 6.9.1, 2021-03-08. Comments work fine on the other (data) drives in the array. On the Parity drive, however, the comment will not persist.
  4. Update to v6.9.0 was flawless, so I installed v6.9.1 without much thought... lesson learned. 1st reboot... nothing working. No network, no nothing. Cold sweat started running already... move the NAS to a display or a display to the NAS... either way this is going to be a disaster... 2nd reboot... (just in case)... NAS is accessible through the network! Not too bad! Starting the array... (more cold sweat) Array started... Parity check started (oooookey, so no clean shutdown/reboot there, obviously). Opening the Docker page... big red letters, containers are starting... 5 minutes later... no change; reload page, still containers are starting... damn... ok, parity does add some performance hit to the tiny A10 CPU running it. 8 minutes later, reloading page... Ahhhh, the red letters are gone, all containers appear to be running! Next check: running some manual tests to verify all custom user-scripts have run... To my big surprise, at this point they had. This was a bit more excitement than what I am looking for from a NAS, but definitely time for some beer after all that cold sweat. Cheers guys, thank you for all the hard work and for the Docker upgrades that appear to be coming out.
  5. I am trying to run some commands and push them into the background, but the GUI terminal seems to get stuck and the script doesn't really seem to finish (I do not see the echo in the window, and running `ps aux | grep avahi-publish` shows the process does not exist). Any suggestions?

```shell
#!/bin/bash
#description=Generate avahi aliases
#foregroundOnly=true
#name=Avahi aliases (sub-domains)

echo "plex.inas.local"
/usr/bin/avahi-publish -a -R plex.inas.local $(avahi-resolve -4 -n inas.local | cut -f 2) &
```
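One pattern worth trying (a sketch, assuming the hang comes from the backgrounded job staying attached to the GUI terminal's session): detach it fully with `nohup`, output redirection, and `disown`, so closing the terminal does not kill it. Names and paths are the ones from the script above.

```shell
#!/bin/bash
# Sketch: fully detach the publisher from the terminal session.
detach() {
    # nohup + redirecting output + disown drops the job from the
    # shell's job table, so it survives when the terminal closes.
    nohup "$@" >/dev/null 2>&1 &
    disown
}

detach /usr/bin/avahi-publish -a -R plex.inas.local \
    "$(avahi-resolve -4 -n inas.local | cut -f 2)"
```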
  6. Almost got it working (was easier than expected). Here's what I have so far, for whoever might be interested in going down that path.

Step-1: Change the default ports for the Unraid user interface. Go to http://inas.local:8008/Settings/ManagementAccess and change the http and https ports. I set them to 8008 and 8443. Just remember to check your ports are not already used by some Docker container, etc.

Step-2: Install Nginx Proxy Manager (docker from user-apps), or build your own container. I had my own configs ready from the past server, but Nginx Proxy Manager works just as well, and as an added bonus you get a comfortable web GUI with it.

Step-3: Configure your subdomains (depends on how you went with Step-2).

Step-4: Install the user-scripts plugin (you could put the commands in /boot/config/go, but better skip the pain). Click to create a new script and add the avahi commands in there. For each subdomain add the following command:

```shell
/usr/bin/avahi-publish -a -R subdomain.domain.tld unraid-ip &
```

```shell
##
## In my case, for my Unraid registered with inas.local,
## to add jackett.inas.local:
## Instead of the IP I used avahi-resolve to actually acquire the IP for
## the inas.local domain. This way, if the Unraid IP changes, you do not
## need to change IPs in the script.
/usr/bin/avahi-publish -a -R jackett.inas.local $(avahi-resolve -4 -n inas.local | cut -f 2) &
```

This is how my final script looks:

```shell
#!/bin/bash
#description=Generate avahi aliases
#foregroundOnly=true
#name=Avahi aliases (sub-domains)

echo "sonarr.inas.local"
/usr/bin/avahi-publish -a -R sonarr.inas.local $(avahi-resolve -4 -n inas.local | cut -f 2) &

echo "plex.inas.local"
/usr/bin/avahi-publish -a -R plex.inas.local $(avahi-resolve -4 -n inas.local | cut -f 2) &

echo "proxy.inas.local"
/usr/bin/avahi-publish -a -R proxy.inas.local $(avahi-resolve -4 -n inas.local | cut -f 2) &

echo "jackett.inas.local"
/usr/bin/avahi-publish -a -R jackett.inas.local $(avahi-resolve -4 -n inas.local | cut -f 2) &
```

Next steps (help wanted and appreciated):
1. Utilise the args in user-scripts (can we have a variable number of variables?) and adjust the script to use them accordingly.
2. Wrap avahi-publish in a script that monitors for network changes and reacts accordingly (interface went up, restart avahi-publish, etc.).

I acknowledge my documentation is insufficient to help people with little knowledge on the subject, so feel free to ask and I'll do my best to answer.
  7. First tests (just manually running avahi-publish) revealed I forgot one important component: I need to also proxy the Unraid web interface. This means I will need to move it to a different port and put it behind the nginx proxy as well. Is it feasible to change the Unraid web interface port, or will everything break?
  8. Hey guys, before moving to Unraid I had an Ubuntu Server installation for my NAS. The only thing I miss from my old setup is the aliases I had configured for avahi. I still have the script I had built for the aliases, which used systemd to bind to network events. I would really love to put that back in action. In practice, this is the service I had created for systemd on Ubuntu Server:

```
[Unit]
Description=Publish %I as alias for %H.local via mdns
BindsTo=sys-subsystem-net-devices-enp5s0.device
Requires=avahi-daemon.service

[Service]
Type=simple
ExecStart=/bin/bash -c "/usr/bin/avahi-publish -a -R %I $(avahi-resolve -4 -n inas.local | cut -f 2)"

[Install]
```

To use the service it was as simple as creating and starting a parameterised service:

```shell
systemctl enable inas-avahi@plex.inas.local
systemctl start inas-avahi@plex.inas.local
```

I know Unraid is using avahi, but I am not familiar with the folder structure or how to make the changes permanent (some startup scripts that copy files, I presume?). So, where are the files I would need to touch? Also, given Unraid is not using systemd, I presume I will need to write an equivalent init.d script, or is there some other way with Unraid? Looking forward to info to get this started. Once ready I will of course share whatever scripts come out of it so anyone can easily reuse them. Cheers.
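For the "make it permanent" part: on systemd-less Unraid, a rough stand-in for `systemctl enable` is the /boot/config/go file, which runs once at boot from persistent flash. A sketch under that assumption, untested; it has none of the unit's BindsTo semantics, so it will not react to the interface going down and up:

```shell
# --- lines to append to /boot/config/go ---
# Republish the mDNS alias on every boot (no restart on network events).
/usr/bin/avahi-publish -a -R plex.inas.local \
    "$(avahi-resolve -4 -n inas.local | cut -f 2)" &
```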
  9. Update coming the next morning: the files that were removed from the share but were still on the disk reappeared on the share 🤔 I removed them again from the share and this time they're gone for good. Over ssh I ran `ls /mnt/user/movies > /mnt/user/tmp/movies_share.txt` and `ls /mnt/disk1/movies > /mnt/user/tmp/movies_disk1.txt`, then ran a diff on the two files (though I'm dead certain there's a faster way with pipes). I understand how 1+1+1+1 = 1 works (🤣). I thought maybe during the parity build it locks some sectors? If that's not the case, and considering it's at the fs driver level, perhaps it's something related to concurrency. If it happens again I highly doubt it will be of any use, but here it is for what it's worth.
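The faster way with pipes does exist: bash process substitution feeds both listings straight into diff, with no temp files (paths as above):

```shell
# Compare the user-share view against the physical disk in one step.
diff <(ls /mnt/user/movies) <(ls /mnt/disk1/movies)
```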
  10. Here's the situation: I deleted some files (movies) using Plex. The titles got removed from Plex and the files no longer exist in the respective share. After noticing that the disk space was not freed, I checked the actual hard disk (i.e. /mnt/disk1/movies) and the files are still there. Clues:
- The array is entirely on xfs (latest version of Unraid).
- At the same time, the parity was being built (first-time parity; I just added the parity drive today).
- There was some "intense" activity throughout the array and caches (fyi: this share does not use cache, cache = NO).
Is this a bug or something expected? Will the files be automatically removed later (i.e. after the parity build is finished), or do I have to manually go and remove them?
  11. Changing the cache to YES for that share would actually do me more harm. I have a limited amount of cache (SSD drives) and it's primarily used by shares related to video editing. If I set the movies share to cache=YES, it would easily saturate the cache, causing issues for the shares that really need it. Regarding the network-based approach: moving files within a machine from disk to disk is significantly faster than doing it over Samba through another machine. The same file that I can move on the machine in 1 minute will take significantly more time over Finder as mounted network storage (Samba). Given there's no "native" file explorer in Unraid, another temporary solution would perhaps be to "monitor" the FS for changes and, when this behaviour is recorded, invoke the mover for the particular files (and god forbid your array runs out of space in the meantime 😂). As this behaviour could cause issues and headaches for people unaware of it (I only noticed it in time because the "Fix Common Problems" plugin spotted it), in my opinion this should be addressed at the OS level so the user settings are 100% applied in real time.
  12. +1 from me, as any alternative can yield surprising results, as in my example.
  13. Ah, I see. So, either copy/delete or explicitly mv directly to one of the array disks. Is there some script I can install to "replace" mv that reads the share settings, determines whether mv or copy/delete is required, and performs it automagically?
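A rough starting point for such a wrapper, purely as a sketch: it does NOT read Unraid's real share settings; `NO_CACHE_SHARES` is a hypothetical hand-maintained list of shares configured with cache=NO. For targets in those shares it copies then deletes (so the write lands where the share config says); otherwise it falls back to plain mv.

```shell
#!/bin/bash
# Hypothetical "smart mv" sketch; NO_CACHE_SHARES is maintained by hand.
NO_CACHE_SHARES="movies"

smart_mv() {
    local src="$1" dst="$2"
    local share
    # Extract the share name from a /mnt/user/<share>/... destination.
    share="$(echo "$dst" | sed -n 's|^/mnt/user/\([^/]*\).*|\1|p')"
    case " $NO_CACHE_SHARES " in
        *" $share "*)
            # cache=NO share: copy then delete instead of rename.
            cp -a "$src" "$dst" && rm -f "$src"
            ;;
        *)
            mv "$src" "$dst"
            ;;
    esac
}
```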
  14. Hi guys, I've got a weird behaviour here and would like to know if this is a bug or if I've done something wrong. I have a share called movies. The share is configured to live only on the array (no cache); config looks like this. I have another share, which lives only on the cache, called tmp. I `ssh` to the NAS and move a file from tmp to movies: `mv /mnt/user/tmp/jdownloader/movie.mp4 /mnt/user/movies/.` The end result is that a movies folder is created on the cache and the file ends up at `/mnt/cache/movies/movie.mp4`. The expected behaviour would have been for it to be moved to the designated folder on the array, since that share explicitly does not use the cache, and as such the mover is never expected to run for this share. Is this a bug, or have I misunderstood something?