Leaderboard

Popular Content

Showing content with the highest reputation on 01/27/23 in all areas

  1. Uncast Episode XIV: Return of the Uncast with Bob from RetroRGB
     Season 2 of the Uncast pod is back and better than ever, with new host Ed Rawlings, aka @SpaceInvaderOne 👾 On this episode, Bob from RetroRGB joins the Uncast to talk about all things retro gaming, his discovery and use cases for Unraid, a deep dive into RetroNAS, and much more! Check out the show links below to connect with Bob or learn more about specific projects discussed.
     Show Topics with ~Timestamps:
     Intro from Ed, aka Spaceinvader One 👾
     ~1:20: Listener participation on the pod with speakpipe.com/uncast. Speakpipe will allow you to ask questions to Ed about Unraid, ask questions directly to guests, and more.
     ~2:50: Upcoming guests
     ~3:30: Bob from RetroRGB joins to talk about Unraid vs. prebuilt NAS solutions, use cases, and RetroNAS VMs.
     ~6:30: Unraid on a laptop?
     ~9:30: Array protection, data recovery, New Configs, new hardware, and client swapping.
     ~11:50: Discovering Unraid, VMs, capture cards, user error.
     ~17:30: VMs, Thunderbolt passthrough issues, Thunderbolt controllers, Intel vs. AMD, motherboard hardware, and BIOS issues/tips.
     ~21:30: All about Bob and RetroRGB.
     ~23:00: Retro games on modern TVs, hardware, and platforms.
     ~24:34: MiSTer FPGA Project
     ~27:15: RetroNAS
     ~30:30: RetroNAS security: creating VLANs, best practices, and networking tips.
     ~37:15: Using Virtiofs with RetroNAS on Unraid, VMs vs. Docker, and streamlining the RetroNAS install process.
     ~43:13: Everdrive console cartridges and optical drive emulators.
     ~46:50: Realistic expectations and advice for new retro gaming enthusiasts.
     ~51:05: MiSTer setup how-tos and retro gaming community demographics.
     ~55:45: Retro gaming, CRTs, emulation scaling, wheeled retro gaming setups, and how to test components and avoid hardware scams.
     ~1:05: Console switches, scalers, and other setup equipment. In the end, it all comes down to personal choice.
     Show Links:
     Connect and support Bob: https://retrorgb.link/bob
     Send in your Uncast questions, comments, and good vibes: https://www.speakpipe.com/uncast
     Spaceinvader One interview on RetroRGB
     MiSTer FPGA Hardware: https://www.retrorgb.com/mister.html
     RetroNAS info: https://www.retrorgb.com/introducing-retronas.html
     Other ways to support and connect with the Uncast:
     Subscribe/Support Spaceinvader One on Youtube: https://www.youtube.com/@uncastpod
    3 points
  2. 3 points
  3. Not related to your original problems, but your appdata, domains, and system shares have files on the array; in fact, the domains and system shares are set to be moved to the array. Ideally, these shares would all be on a fast pool (cache) so Docker/VM performance isn't impacted by slower parity writes, and so array disks can spin down, since these files are always open. You also have some unassigned SSDs mounted. How are you using these? They might be better as additional pools instead of unassigned devices.
    2 points
  4. Regarding benchmarking solid-state drives, I ditched the "dd" command in favor of the "fio" utility. I hope to have the current bugs fixed soon, along with the switch to fio. I will still use dd for spinners. The dd command simply had too much overhead pulling from /dev/random; there was still some overhead, though less, when using /dev/zero. With the fio utility, I was able to utilize the drive's full bandwidth.
    2 points
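     The switch described above could look something like the following fio job file. This is a hypothetical sketch (the job name, test path, and sizes are illustrative, not the plugin's actual settings); unlike dd, fio generates its own I/O internally, so there is no /dev/random or /dev/zero read overhead, and direct=1 bypasses the page cache so results reflect the device.

     ```shell
     # Write a hypothetical fio job file for a sequential-read benchmark.
     # The filename path is an example; afterwards you would run: fio "$job"
     job=$(mktemp)
     cat > "$job" <<'EOF'
     [seqread]
     filename=/mnt/disks/testssd/fio.tmp
     rw=read
     bs=1M
     size=1g
     ioengine=libaio
     direct=1
     EOF
     cat "$job"
     ```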
  5. LXC (Unraid 6.10.0+) LXC is a well-known Linux container runtime that consists of tools, templates, and library and language bindings. It's pretty low-level and very flexible, and it covers just about every containment feature supported by the upstream kernel. This plugin doesn't include lxc, the CLI tool provided by LXD! It basically allows you to run an isolated system with shared resources at the CLI level (without a GUI) on Unraid, which can be deployed in a matter of seconds and destroyed just as quickly. Please keep in mind that you have to set up everything manually after deploying the container, e.g. SSH access or a dedicated user account other than root. ATTENTION: This plugin is currently in development and features will be added over time. cgroup v2 (ONLY NECESSARY if you are below Unraid version 6.12.0): Distributions which use systemd (Ubuntu, Debian Bookworm+, ...) will not work unless you enable cgroup v2. To enable cgroup v2, append the following to your syslinux.conf and reboot afterwards: unraidcgroup2 (Unraid supports cgroup v2 since version v6.11.0-rc4). Install LXC from the CA App: Go to the Settings tab in Unraid and click on "LXC". Enable the LXC service, select the default storage path for your images (this path will be created if it doesn't exist, and it always needs a trailing / ), and click on "Update". ATTENTION: - It is strongly recommended to use a real path like "/mnt/cache/lxc/" or "/mnt/diskX/lxc/" instead of a FUSE path like "/mnt/user/lxc/", to avoid slowing down the entire system when performing heavy I/O operations in the container(s) and to avoid issues when the Mover wants to move data from a container which is currently running. - It is also strongly recommended not to share this path over NFS or SMB, because if the permissions get messed up the container won't start anymore, and to avoid data loss in the container(s)!
- Never run New Permissions from the Unraid Tools menu on this directory, because you will basically destroy your container(s)! Now you can see the newly created directory in your Shares tab in Unraid. If you are using a real path (which is strongly recommended), whether it's on the Cache or the Array, it should be fine to leave the Use Cache setting at No, because the Mover won't touch this directory if it's set to No. Now you will see LXC appearing in Unraid; click on it to navigate to it. Click on "Add Container" to add a container. On the next page you can specify the Container Name, the Distribution, Release, MAC Address, and whether Autostart should be enabled for the container; then click on "Create". You can get a full list of Distributions and Releases to choose from here. The MAC Address is generated randomly every time; you can change it if you need a specific one. The Autostart checkbox lets you choose whether the container should start up when the Array or LXC service is started (this can be changed later). In the next popup you will see information about the installation status of the container (don't close this window until you see the "Done" button). After clicking on "Done" and "Done" in the previous window, you will be greeted with this screen on the LXC page; to start the container, click on "Start". If you want to disable Autostart for the container, click on "Disable" and the button will change to "Enable"; click on "Enable" to enable it again.
After starting the container you will see various information (assigned CPUs, memory usage, IP address) about the container itself. By clicking on the container name you will get the storage location of this container's configuration file and the config file contents itself; for further information on the configuration file, see here. Now you can attach to the started container by clicking the Terminal symbol in the top right corner of Unraid and typing lxc-attach CONTAINERNAME /bin/bash (in this case lxc-attach DebianLXC /bin/bash). You can of course also connect to the container without /bin/bash, but it is always recommended to connect with the shell that you prefer. You will see that the terminal's hostname has changed to the container's name; this means you are successfully attached to the container's shell and the container is ready to use. I recommend always updating the packages first; for a Debian-based container, run apt-get update && apt-get upgrade. Please keep in mind that this container is pretty much empty and nothing but the basic tools are installed, so you have to install nano, vi, openssh-server, etc. yourself. To install the SSH server (for Debian-based containers), see the second post.
    1 point
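     On pre-6.12.0 systems, the cgroup v2 step above amounts to adding `unraidcgroup2` to the kernel append line. A minimal sketch, run here against a temporary stand-in file rather than the real /boot/syslinux/syslinux.cfg (the label and append contents are illustrative):

     ```shell
     cfg=$(mktemp)   # stand-in for /boot/syslinux/syslinux.cfg
     cat > "$cfg" <<'EOF'
     label Unraid OS
       menu default
       kernel /bzimage
       append initrd=/bzroot
     EOF

     # Add the flag to the kernel "append" line unless it is already present
     grep -q 'unraidcgroup2' "$cfg" || sed -i 's|^  append initrd=/bzroot|& unraidcgroup2|' "$cfg"
     grep 'append' "$cfg"
     ```

     Remember that on a real server the change only takes effect after a reboot.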
  6. I've installed SAS spindown on my Unraid 6.11.5 and it works fine. I just have to say it initially wasn't working at all; then I reinstalled the plugin and rebooted the server, and since then it has worked perfectly... I'd suggest reinstalling and rebooting the server, then trying to spin down a SAS drive while the syslog is open, and you should see something like the following. By the way, the log you have to check is this one. PS: I don't know if it's the same for other users, but some drive models require more time to spin down; it's not a problem for sure... Here I tried to spin up and then spin down all the drives; you can test whether everything works by doing the same. I'm really thankful to the creator of this plugin: I have 8 SAS drives in my build and I'm constantly saving 80W!
    1 point
  7. So everything is back to normal 🎉 Thank you everyone for the help and support. dansunraidnas-diagnostics-20230127-1851.zip
    1 point
  8. Yes, that's right of course! The container creator must have built the "feature" into their container. Your best bet is to check the creator's page (hub.docker.com, GitHub, ...) or the Unraid support thread to see whether a corresponding option is mentioned or documented.
    1 point
  9. I wasn't able to save much of anything. Thanks for your help though. Reformat complete. Docker containers reinstalled using pre-existing templates. Reconfiguring containers and rebuilding VMs.
    1 point
  10. Not normal. I run pfSense as a VM, and don't have any of the listed issues. Running your gateway as a VM can cause issues, but not if you have things configured to account for them. In my case, Unraid has a 10GB interface with a fixed IP connected to the main switch, and the pfSense VM has 2 1GB connections passed through, the WAN assigned interface connected to the modem, the LAN assigned interface connected to the same switch as the 10GB. Other than not having DNS and WAN connectivity until the VM is running, Unraid handles it just fine.
    1 point
  11. Apologies. Ignore me, as I just realized that this is the non-Alder Lake thread.
    1 point
  12. I'd like to just chime in here to add to the list and confirm that I have no hang or crashing issues on 6.11.1 with an i5-12600k running on an MSI PRO Z690-A board. I have Windows VMs, about 20 docker containers, plex transcoding AND jellyfin transcoding confirmed to be working with no crashes or hangs. I have a monitor hooked up to the server directly at all times. I've never tried to run the transcoder with it unplugged. At the start of all of this, I was experiencing the hangs as described in the original report. It just took time for both unraid to release an update and for Plex to fix their transcoder. Before I enabled it, I was getting crashes that logged to syslog randomly, and ich777 was kind enough to help and suggested I switch to ipvlan from macvlan which fixed those crashes. Then I re-enabled plex and jellyfin transcoding and it all works now. I've been crash and hang free for over 3 months.
    1 point
  13. On Unraid that is root by default. This decision depends on the container and its author, because it's really a Docker matter; Unraid is only the host. Why do you ask?
    1 point
  14. I have the same motherboard as you (ASRock H370) and can reach C7. Have you enabled all the ASPM settings in the BIOS? I also disabled the sound card and a few other settings in the BIOS.
      My Hardware:
      Motherboard: ASRock H370M-ITX/ac
      Processor: i5-8500T
      Memory: 32GB (2x 16GB DDR4)
      Disks: 3x 16TB Seagate Exos, 1x 8TB IronWolf, 2x 6TB IronWolf, 2x 3TB WD Greens (currently in the process of clearing so I can remove them), 2x 500GB Crucial SSDs, 1x Asus BW-16D1HT Blu-ray drive
      NVMe: 1TB Crucial P5 Plus M.2 PCIe Gen4 (should have got a 2TB :-)) in the M.2 slot
      Array is connected via a Broadcom LSI 9207 SAS PCIe card
      PSU: EVGA 500 W1, 80+ White 500W, Power Supply 100-W1-0500-K3
      UPS: APC Smart-UPS 1000
      I did the install-one-by-one method last weekend, and with everything installed except the SAS card I was able to reach C7 states and was drawing 34W at idle (no HDs attached except the SSDs). With the SAS card I could only reach C3. I also noted a maximum 150W load on boot. With everything installed, under a full parity sync I'm drawing 107W. Note the 107W also includes: a 16-port Unifi managed switch, a Unifi Controller Key, a Unifi NanoHD WiFi AP (via PoE) + RSP, and a Pi-hole; they are all plugged into the same UPS I'm using to measure. I'm in the process of removing the SAS card and putting the NVMe drive in its place, then putting a 4-port SATA PCIe card into the M.2 slot (mainly because I already have the card). Along with that, I'm also in the process of installing some power monitoring plugs so I can record the server separately from my other gear. Arriving today :)
    1 point
  15. You can add the following with no issues:
      i9 10850K on ASRock Z590
      i9 9900 on MSI Z370
      i5 2405S on ASUS P8Z77-V LX
      and 1 more ASUS with a Celeron which is currently offline.
      All have no issues with the Intel iGPU in Plex on the latest Unraid releases; the 10850K also has no issues with ffmpeg encoding like Unmanic (the others are not used for that, only Plex).
    1 point
  16. Ok, I see the potential problem; it happened to me in the first days of my transition to the VM, so I will follow this recommendation, ahah! 😄 Thank you for this prompt answer! Have a great day!
    1 point
  17. Depends. GAME_PARAMS are appended on start to the dedicated server; usually you don't need anything in there, this is just if you have some weird mod or want to add a start parameter for whatever reason. Validate installation is basically the same as validating the game files on Steam (please never set this to true without a good reason, because when set to true it checks the game files on every container start and slows down the container's start process). WS_CONTENT is explained above; if you don't need it, leave it empty.
    1 point
  18. I did the exact same thing last week: went from 6.8.3 to 6.11.5, started crashing during Plex transcodes, and reverted back last night. Guess I'm stuck on 6.8.3 forever on that box.
    1 point
  19. No performance issues, but if you have an active Docker container using the driver and you turn on the VM, this can result in a crash overall, as it's an either/or usage: either your VM uses the GPU in passthrough, OR the host uses it (for example for GUI usage, Docker usage, ...). Just be aware.
    1 point
  20. I hope you've seen the variable WS_CONTENT in the template. I can tell for sure that it's working, because a buddy also has a server running with some mods on it. Please keep in mind that if the container starts to loop and doesn't start properly, one of the mods is not working properly.
    1 point
  21. Hello, I have fixed the problem by resetting configs and wiping my drives via reformatting. I was having problems with other Docker containers as well, so I don't know what was going on, but resetting everything seems to have fixed it. Got the server running without workshop content; now attempting to add it.
    1 point
  22. It was a fan issue with ASUS X470 boards: the fan controller driver that's now in the base Unraid/Linux image bugs out, causing all fans to stop after a random amount of time, within a week or so. I have a file in 'config\modprobe.d' named 'disable-asus-wmi.conf' with the text '# Workaround broken firmware on ASUS motherboards' followed by 'blacklist asus_wmi_sensors'. An update while I was awaiting a reply: I downgraded back to Unraid 6.10.3, tried the usual stuff, and failed. I re-updated back to 6.11.5, did the usual thing, and now it works. I have no idea why, but it has persisted through several reboots and transcoding now works on the GPU. Best guess: something just got stuck initially and fixed itself when I cycled through the downgrade/upgrade.
    1 point
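      The workaround above boils down to one small modprobe blacklist file. A sketch, written here to a temporary directory instead of the real config\modprobe.d folder on the flash drive:

      ```shell
      confdir=$(mktemp -d)   # stand-in for /boot/config/modprobe.d/
      cat > "$confdir/disable-asus-wmi.conf" <<'EOF'
      # Workaround broken firmware on ASUS motherboards
      blacklist asus_wmi_sensors
      EOF
      cat "$confdir/disable-asus-wmi.conf"
      ```

      Blacklisting the module this way prevents the buggy sensor driver from loading at all on the next boot.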
  23. Hi everyone, thought I'd update the thread and close it out, since it seems to be working now. The cause was my network switch (HP ProCurve 1800), which I reset to factory settings. Thank you everyone for the help; I didn't think this could happen since I hadn't configured any of the switch's features.
    1 point
  24. You will see the container memory only with cgroup v1. I've already looked into this; there is no easy fix for it on Unraid, but it's on my to-do list. Certain file paths are perhaps too long, but it will create the tar file anyway. Have you tried the integrated snapshot feature yet?
    1 point
  25. Great, everything in those folders will survive a container update, since they are persistent data.
    1 point
  26. Nice news! Glad you got there!
    1 point
  27. For me there was a chart in the SWAG log with all the files that were outdated (around 8 files in my case). I replaced them all with the new sample files. I was using 3 .conf files that I had to edit to match my old server names from when I set it up. Do you have a list of outdated files in the log? You will likely have to restart SWAG to see them, as my log was spamming an error.
    1 point
  28. I updated my docker and it didn't come back up properly. I restarted it. The log now just shows:
      [migrations] started
      [migrations] no migrations found
      [linuxserver.io ASCII banner]
      Brought to you by linuxserver.io
      To support LSIO projects visit: https://www.linuxserver.io/donate/
      GID/UID
      User uid: 99
      User gid: 100
      **** Server already claimed ****
      No update required
      [custom-init] No custom files found, skipping...
      Starting Plex Media Server. . . (you can ignore the libusb_init error)
      [ls.io-init] done.
      When I go to https://ipadress:32400/web/index.html I just get:
      This XML file does not appear to have any style information associated with it. The document tree is shown below.
      <Response code="503" title="Maintenance" status="Plex Media Server is currently running database migrations."/>
      Any ideas where to go from here?
    1 point
  29. There have been issues with onboard Ryzen SATA controllers in the past, where they dropped out under intensive use like parity checks, disk rebuilds, etc. It's much less frequent with the latest Unraid and newer kernels, but it still happens. Also look for a BIOS update; sometimes that can help.
    1 point
  30. Wrong assumption. The Unraid cache is a write cache for new data, not a classic read cache. Just enable the Help on a share's detail page (and pay attention to the little word "new"). Read caching is handled by Unraid's Linux underpinnings, and actually very well.
    1 point
  31. This is what I did to fix it:
      Stop the Swag docker
      Go to the \\<server>\appdata\swag\nginx folder
      Rename the original nginx.conf to nginx.conf.old
      Copy nginx.conf.sample to nginx.conf
      Rename ssl.conf to ssl.conf.old
      Copy ssl.conf.sample to ssl.conf
      Restart the Swag docker
      This worked for me.
    1 point
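      The rename/copy steps above can also be scripted. A sketch against a temporary directory with stand-in files (on a real server the directory would be the SWAG appdata nginx folder, and the docker must be stopped first):

      ```shell
      nginxdir=$(mktemp -d)   # stand-in for appdata/swag/nginx
      # stand-ins for the existing live configs and the shipped samples
      echo "old nginx config" > "$nginxdir/nginx.conf"
      echo "new nginx sample" > "$nginxdir/nginx.conf.sample"
      echo "old ssl config"   > "$nginxdir/ssl.conf"
      echo "new ssl sample"   > "$nginxdir/ssl.conf.sample"

      for f in nginx.conf ssl.conf; do
        mv "$nginxdir/$f" "$nginxdir/$f.old"      # keep the original as *.old
        cp "$nginxdir/$f.sample" "$nginxdir/$f"   # promote the shipped sample
      done
      ls "$nginxdir"
      ```

      Keeping the *.old copies means you can diff them against the samples later to re-apply any custom server names.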
  32. I've been running unRAID for at least 6 years. In that time I've had 2 different builds, but I've used the same USB key for both. I was skeptical about it at first, but now I think the OS living on a USB stick is one of unRAID's best features: it frees up a SATA/NVMe port for more storage and makes it easier to move to new hardware.
    1 point
  33. Please try to use ipvlan instead of macvlan in your Docker settings. I would also recommend creating a dedicated bug report, since this seems unrelated to this issue.
    1 point
  34. You can only specify a single sub-directory for a share. Look at the Help: List of directories to exclude from the Recycle Bin, separated by commas. To specify a particular share directory, use 'share/directory'. You can specify up to one sub-directory. Unassigned Devices are specified the same way, using 'mountpoint/directory'. Wild cards '*' and '?' are allowed in the directory name. So you specify 'pool/Backups' for the complete 'Backups' directory. If you are looking to exclude only 'pool/Backups/Portables', specify it as 'pool/Portables'. I do see an issue with specifying multiple share folders: I think 'pool/Backups,pool/Portables' should work, but it looks like only 'Portables' is excluded. I think the Recycle Bin should exclude both 'Backups' and 'Portables' when specified this way.
    1 point
  35. Unraid 6.9.2 has problems. My configuration is a 5700G, 32GB of 3200MHz RAM, and an MSI B450M Mortar motherboard. On 6.9.2 I could not get GPU passthrough working no matter what I tried; moving straight to 6.11.5 sorted it out, and the display shows OK in a Win10 VM. The method: download the firmware for the MSI board from the official site, then use UBU_v1_79_17 to extract the vBIOS firmware (the file vbios_1638.dat); otherwise just configure the VM as usual. The driver installed inside the VM then correctly recognizes the APU graphics. My advice: with new hardware, use the newest Unraid version whenever possible!
    1 point
  36. I was able to temporarily fix this issue by adding to my syslinux configuration. This is letting me limp along for now but it appears to be causing issues with some of my VMs. I got the idea from this thread: https://forum.level1techs.com/t/solved-ubuntu-22-04-on-asus-wrx80e-sage-not-detecting-usb-and-m-2/186001/3 Still searching for the right answer.
    1 point
  37. In the overview, click on the disk icon. A pop-up for that disk will then open.
    1 point
  38. It is here: https://wiki.unraid.net/Manual/Security#Securing_webGui_connections_.28SSL.29 Read down to the heading titled "How would you like to access the Unraid webGui while on your LAN?" and then "Https with Unraid.net certificate - with fallback URL if DNS is unavailable"
    1 point
  39. I think it would be great to do such things through the CA App rather than through the plugin itself; this will only be integrated for testing. I have already made two scripts: the first installs a full (minimal) desktop environment, and the second installs Home Assistant Core into a freshly installed Bullseye container. These two scripts are more of a PoC and for testing, but they work perfectly fine. 😁
    1 point
  40. Hi all, any chance of a mobile friendly ui being implemented? Current experience on mobile is terrible.
    1 point
  41. That's because Unraid isn't meant to be enterprise software or externally accessible via the webUI. Unraid is SMB software at best, and at worst more of a homelab OS. As a security engineer, if I suggested Unraid in my work environment (a 40k+ user enterprise subject to FedRAMP and HIPAA), without being utterly facetious, I would probably be fired just for making the suggestion. I do think MFA on everything is a good standard to hold ourselves to; however, MFA isn't a replacement for good security practices. I agree it should be on the roadmap, but high priority...? Honestly, if you leave your Unraid admin UI and SSH wide open to the internet, or allow your WAN-facing dockers to run privileged, you shouldn't be running Unraid in the first place; you should be learning basic network security. As people have said, use a VPN, or a remote connection to a different PC on your network, to access your admin UI when you're not there. Or even use the Unraid My Servers plugin and have MFA on your Unraid community account. https://blog.creekorful.org/2020/08/docker-privilege-escalation/ is a good example of why you should not run your dockers as privileged. Here is what privileged actually does: https://www.educba.com/docker-privileged/
    1 point
  42. This is in fact exactly the problem. In my XML, I also had `type=raw`, which, when changed back to `qcow2`, yielded a successful startup.
    1 point
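      For anyone hitting the same thing: the fix is the `type` attribute on the disk's `<driver>` element in the VM's XML. A sketch with a hypothetical domain XML fragment, edited here in a temp file (on Unraid you would edit the XML in the VM's XML view, or with `virsh edit`; the vdisk path is an example):

      ```shell
      xml=$(mktemp)
      cat > "$xml" <<'EOF'
      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <source file='/mnt/user/domains/MyVM/vdisk1.img'/>
      </disk>
      EOF

      # A qcow2 vdisk declared as raw fails to boot; flip the type back
      sed -i "s/type='raw'/type='qcow2'/" "$xml"
      grep 'driver' "$xml"
      ```

      The declared type must match the actual on-disk image format; you can check the real format with `qemu-img info` on the vdisk file.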
  43. I think the glaring issue is that this thread seems to imply that the unraid user interface, or server itself should be hardened against external attacks. This would mean that unraid itself is exposed to the external network/internet, which basically just shouldn't be the case. This is a big clear red "don't do that." Instead, use a reverse proxy to get services running on the unraid server exposed to the outside world. As far as getting access to unraid itself exposed to the outside world, if you absolutely must, I would use something like Apache Guacamole with 2FA. This way the server itself is never exposed to the outside world, and your interface to it is protected with 2FA. I don't think this is something in the scope of unraid to develop a secure remote access implementation. I don't think the WebUI has been scrutinized with penetration testing, and I don't think a system with only a root account should ever be exposed to the internet directly.
    1 point
  44. Hi. (Un)fortunately, I deal with security every day at work. Your point is valid as long as you are referring to Unraid being used in a home setting. However, in an enterprise (or, maybe in Unraid's case, SMB) environment, perimeter-based security is (rightfully) considered an antiquated concept and each server needs proper protection, regardless of ingress sources. This means that MFA is, indeed, a must. My 2c. Edit: Also, with the new "My servers" plugin, even home configurations can be exposed, so I hope MFA finds its way in that online design.
    1 point
  45. I would stop the Docker service (Settings - Docker), delete the image, then re-enable the service, followed by Apps, Previous Apps, and check off what you want reinstalled. If that doesn't work, post your Tools - Diagnostics. Each app has its own permission requirements that may or may not be compatible with SMB; that's quite normal and to be expected. By running New Permissions and including the appdata share, you may (or may not) impact the ability of the container to run (although at first thought this doesn't appear to be why yours refuses to run, unless you were also running a docker folder instead of an image, in which case yes, you've completely trashed the image, hence my comment above). This is the reason why (if you've got FCP installed) there also exists a Docker Safe New Permissions tool, which will not let you run against the appdata share.
    1 point
  46. While I agree with you that the security in UnRAID seems pretty weak at default settings, your router admin page should not be accessible from the outside if you configure it correctly and keep it up to date. You highlight a big problem, though: default settings in all these Docker containers we pull. I think that boils down to the individual user and the software being used. Your friend is tech-savvy enough to set up his own OMV on UnRAID, so he should definitely be techy enough to know to change the default admin password. And the software should be made in such a way that default passwords are a major error event that fires warnings every time you log in. 2FA is, in my opinion, a complementary security feature that should not keep a piece of software secure on its own. But I hope some big steps are taken with regard to security by the UnRAID team going forward. I'm still on my trial period with 12 days left, and I really love UnRAID, but I keep being scared by some of the security defaults (SSH enabled with password authentication even though the keys are generated and stored on flash, and no simple switch in the UI to disable password logons, why???). Root as the default user, and major functionality put in the hands of the community (Fix Common Problems etc.), which is a huge attack surface, because I guess these plugins in UnRAID run as root? It only takes one big community addon to be hit and a lot of servers will be infected, and I guess UnRAID's stance on this issue will be something along the lines of "you used community addons at your own risk", which is true. Sorry if I'm ranting in a somewhat unrelated thread, as this post is more about general security on UnRAID.
    1 point
  47. Thanks for the help in this post, it works fantastically. TL;DR of the original: here's a step-by-step guide for creating a mount on VM creation (as of Unraid 6.8.3).
      In the "Unraid Share" section, select the Unraid folder that you want to make mountable. This can be an individual share, or a parent directory of the share for multiple shares, e.g. `/mnt/user`.
      In the "Unraid Mount tag" section, enter a tag name; this can be anything and will be passed to the VM, e.g. `myMountTag`.
      Complete VM setup, power on, and install your VM OS as normal.
      The following steps require a root/sudo user:
      Make a backup copy of fstab in case you mess up your configuration: `sudo cp /etc/fstab /etc/fstab.orig`
      Create a target mount directory where you want to mount your share, e.g. `sudo mkdir /path/to/myMountedDir`
      Edit the `/etc/fstab` config by adding the following line to the end of the file (change tag & path to your needs): `myMountTag /path/to/myMountedDir 9p trans=virtio,version=9p2000.L,_netdev,rw 0 0`
      Save the fstab file and run `sudo mount -a` to check your mount works (there should be no output on success).
      You should now have a mounted share in your VM.
      Further detail, for anyone new to Unraid looking for an explanation of the fstab values:
      <device>: myMountTag
      <mount point>: /path/to/myMountedDir
      <file system type>: 9p (the protocol that QEMU uses for a VirtFS)
      <options>: trans=virtio,version=9p2000.L (our transport for this share will be over virtio, and we specify 9P version 2000.L because the default for QEMU is 2000.U; "L" has better support for ACLs, file locking, more efficient directory listing, deletion edge cases, etc.), _netdev (tells the system that this mount relies on the network, and to delay the mount until a network is available), rw (mount as read/write)
      <dump>: 0 (disables backup via the dump command)
      <pass num>: 0 (disables any error checking)
      Cheers!
    1 point
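      The fstab edit in the guide can be sketched as a script. Here it appends the example line to a temporary file rather than the real /etc/fstab (the tag and mount point are the guide's example names):

      ```shell
      fstab=$(mktemp)   # stand-in for /etc/fstab; back up the real one first
      echo 'myMountTag /path/to/myMountedDir 9p trans=virtio,version=9p2000.L,_netdev,rw 0 0' >> "$fstab"
      tail -n 1 "$fstab"
      # on the real system you would follow this with: sudo mount -a
      ```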