testdasi

Everything posted by testdasi

  1. Read up on rclone mount. I think you are confusing rclone sync (which is one-way; it runs until the sync is complete) with rclone mount (which is a mount point; it runs until the process is killed or the server is shut down). Essentially, rclone mount acts like a local folder, except what you read from the mount point comes directly from the cloud and what you write there goes directly to the cloud. You don't use rclone mount to sync. You use Syncthing to sync WITH an rclone mount (with = to + from; which direction it goes depends on which side Syncthing determines to be newer). See the sketch below for what a mount looks like. Obviously, this requires a good Internet connection and enough memory that rclone doesn't get killed by out-of-memory errors. Maybe read the topic below on how to integrate rclone into Unraid with DZMM's excellent script. That may make things clearer.
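     A minimal sketch of an rclone mount (assuming a remote named "gdrive" has already been configured with rclone config; the paths and cache values are illustrative, not prescriptive):

       mkdir -p /mnt/user/mount_rclone/gdrive
       rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
         --allow-other \
         --dir-cache-time 720h \
         --buffer-size 256M &

     Anything you read from or write to /mnt/user/mount_rclone/gdrive then goes straight to the cloud remote.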
  2. Syncthing acts as a remote but it still saves data locally. You can set up 2 different Syncthing dockers with different names, folder mappings and unique IPs (you may not even need unique IPs as long as the ports are unique) and you will be able to sync between 2 local folders (acting as 2 separate remotes), e.g.: /mnt/user/folder1 -> Syncthing docker A <--- handshake ---> Syncthing docker B <--- /mnt/user/folder2. Rclone sync is one-way, but rclone mount is a real-time-ish representation of the cloud storage. That means you can integrate rclone into the equation by mapping an rclone mount to the Syncthing docker, e.g.: mount gdrive and onedrive using rclone to /mnt/user/mount_rclone/gdrive and /mnt/user/mount_rclone/onedrive, then: /mnt/user/mount_rclone/ -> Syncthing docker A <--- handshake ---> Syncthing docker B <--- /mnt/user/sync/ (a rough sketch of the two-docker setup is below). You can also use a VM, but it is extremely convoluted because, as you mentioned, network drives for whatever reason are heavily frowned upon by cloud storage providers. You therefore need a vdisk that matches the cloud storage size dedicated to saving such data + Syncthing to create the remote, which then handshakes with "docker B" in the example above.
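     A rough sketch of the two-container setup (the image name, ports and paths are illustrative assumptions; on Unraid you would normally do this through docker templates rather than the command line):

       # Syncthing A, exposing /mnt/user/mount_rclone as its data folder
       docker run -d --name=syncthing-a \
         -p 8384:8384 -p 22000:22000 \
         -v /mnt/user/mount_rclone:/data \
         linuxserver/syncthing
       # Syncthing B, exposing /mnt/user/sync, on shifted host ports
       docker run -d --name=syncthing-b \
         -p 8385:8384 -p 22001:22000 \
         -v /mnt/user/sync:/data \
         linuxserver/syncthing

     The two instances then pair with each other over the shifted ports, syncing /mnt/user/mount_rclone (cloud) with /mnt/user/sync (local).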
  3. There's no diff if you account for the diff in the actual devices plugged into your system vs mine. For example, you have the GTX 1660 and I have the GTX 1070. Yours has 4 "functions" (42:00.0 -> 3, which are the GPU, the HDMI audio and 2x USB devices respectively). Mine only has 2 "functions" (GPU + HDMI audio). All 4 functions of your GTX 1660 must be passed through together to the same VM for it to work (in the same way that both functions of my 1070 must be passed through together to the same VM). You need to watch SpaceInvader One's tutorial on YouTube on how to stub (aka bind) PCIe devices (e.g. "functions") so they show up on the VM template for you to include in the pass-through (I think it's in the vid on passing through a USB controller; a hedged sketch is below). And don't forget to dump your own vbios. If those things still don't work then share your xml and I can have a look. PS: ignore any PCIe "bridge" device. They should not be passed through and will not interfere with pass-through.
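     A hedged sketch of stubbing by PCI address (the addresses below are your four 42:00.x functions; the exact mechanism varies by Unraid version, so follow the video for yours):

       # /boot/config/vfio-pci.cfg -- bind all four functions to vfio-pci at boot
       BIND=0000:42:00.0 0000:42:00.1 0000:42:00.2 0000:42:00.3

     After a reboot, all four devices should show up as selectable in the VM template.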
  4. It's rather peculiar - your spindown is 0 1 2 3 then 29. How is your array set up?
  5. In addition to the 4 things in the post above, you also need to provide details on how you got the rom file. On TechPowerUp, there are 2 different versions of the STRIX OC as well. It's a pretty common new-user mistake to get the wrong vbios. The only way to be sure you have the right file is to dump your own (a hedged sketch is below). Also, do NOT turn off Hyper-V. That workaround is outdated. The current workaround is to keep it on.
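     A hedged sketch of dumping your own vbios from sysfs (the PCI address is an example; the card must not be in use by Unraid or a VM at the time, and SpaceInvader One's dump script handles more edge cases than this):

       cd /sys/bus/pci/devices/0000:0a:00.0
       echo 1 > rom              # enable reading the ROM
       cat rom > /boot/vbios.rom
       echo 0 > rom              # disable again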
  6. Then keep in mind that there are outstanding performance / unnecessary-wear issues with the btrfs cache pool, so if you can live with a single drive, have a single-drive xfs-formatted cache pool instead.
  7. Turbo write means you typically don't need a cache pool, as most of the time it would saturate gigabit bandwidth. The archaic use of an SSD cache pool as a write cache is generally no longer needed. So you will only need an SSD if you want to use it for docker and/or VMs. Make sure to read the help box about what the various Cache settings do and make sure you understand it, especially what would trigger a mover run (or not). It's a common new-user mistake to set this wrong based on what the user thinks the settings do instead of reading the Help box.
  8. Are you booting in GUI mode? Try booting in command line mode to see if it helps.
  9. This is shfs overhead (the Unraid share functionality). I worked around it by creating a custom smb config to expose my /mnt/cache/share (a sketch is below).
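     A hedged sketch of such a config (the share name, path and user are placeholders; Unraid picks up /boot/config/smb-extra.conf, but double-check the location on your version):

       # /boot/config/smb-extra.conf
       [cache_share]
           path = /mnt/cache/share
           browseable = yes
           valid users = youruser
           write list = youruser

     Clients then connect to \\TOWER\cache_share directly, bypassing shfs.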
  10. You are overthinking it with regards to CPU idle power consumption. The diff in idle consumption between an 8100 and a 9700 is so small that if the electricity cost diff is a concern to you then you probably should be more concerned about necessities such as food and water instead. The reason a dedicated graphics card used by Unraid doesn't idle properly is that Unraid doesn't contain proper AMD/Nvidia drivers. If you pass through the card to a VM then the card would run with the right drivers (in the VM) and thus would idle properly (that is assuming there's always a VM using the card at all times, which should be the case; there's no point shutting down the VM while Unraid is running). Your CPU choices have an iGPU, so you should configure your BIOS to boot Unraid with the iGPU, which brings us back to my first point about CPU idle power consumption being a non-concern.
  11. 1. Yes. The iGPU is only good enough for some games, and that's bare-metal. Add VM overhead and most games would be unplayable. What games are you playing? 2. It depends on the dedicated GPU. A GT 710, of course not. An RTX 2080 Ti, of course yes (by a massive margin). And everything in between. 3. For gaming, unlikely. The graphics card is almost always the bottleneck.
  12. You mentioned "vmdk" in the original post so that was the misleading part. There's nothing that looks wrong with your xml. What do you mean by "not starting"?
  13. vmdk? Check your xml to see if it uses the right vdisk format. Unraid will only use RAW or QCOW2 by default, so another format, e.g. vmdk, needs the driver type tag in the xml changed (see the sketch below).
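     Two hedged options (the disk paths are examples). Either point the driver tag in the xml at the vmdk format:

       <driver name='qemu' type='vmdk' cache='writeback'/>

     or convert the vdisk to raw so the default tag works unchanged:

       qemu-img convert -p -f vmdk -O raw /mnt/user/domains/win10/disk.vmdk /mnt/user/domains/win10/vdisk1.img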
  14. DRAM cache is critical. Avoid DRAM-less SSDs like the plague. A DRAM-less SSD can be slower than a HDD, which makes it pointless to get an SSD. A little trick I have found is just to google the SSD model and "DRAMless" and read various reviews to see if they mention the SSD is DRAM-less. Some reviews have pictures of the circuit board and you can even see the DRAM chip. Some just outright say stuff like "this DRAMless model is blablabla". Some will have spec sheets that will tell you if there's DRAM (and even the exact amount of DRAM). Alternatively, bottom-price SSDs are typically DRAM-less, so don't buy the cheapest stuff (compare on price / GB). This is only a general guide though, because there can be old models that are cheap just because they are old (and conversely newer DRAM-less models that are not at the bottom of the price chart because they are new).
  15. You will need to run the community-built Unraid Nvidia build to be able to use the Quadro for hardware transcoding with a Plex docker. As for sync with Google Drive, see the post below on how to integrate rclone with Unraid (a minimal example is sketched below).
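     A minimal hedged example of a one-way sync to Google Drive (the remote name and paths are placeholders; see the rclone topic for a proper setup):

       rclone sync /mnt/user/photos gdrive:photos --progress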
  16. I don't know what foreign content you have, but Plex should be able to handle anything as long as there are entries on TheMovieDB, TVDB or IMDb.
  17. What do you mean by "mounting shares"? That field in the VM template where you enter /mnt/user/share and a mount name and then mount the mount name from within the OS? If so, that has been a known issue for quite some time. It is theoretically possible to tune it (according to Red Hat) but nobody has ever managed to do so, so performance is terrible. You would have better performance mounting smb shares instead (an example is below). The best performance with NVMe is to create a custom smb config to access /mnt/cache/share (to bypass shfs). Alternatively, mounting the NVMe with Unassigned Devices will also bypass shfs and give you the best performance.
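     A hedged example of mounting the share over SMB from a Linux guest instead (the server name, share and credentials are placeholders):

       mount -t cifs //TOWER/share /mnt/share -o username=youruser,password=yourpass,vers=3.0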
  18. I do both. Related stuff is aggregated, e.g. media remotes are on the same mergerfs mount, backup remotes are on another mergerfs mount and so on (a sketch is below).
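     A hedged sketch of aggregating a local folder and a cloud remote under one mergerfs mount (the paths and options are illustrative, loosely following DZMM's script):

       mergerfs /mnt/user/local:/mnt/user/mount_rclone/gdrive /mnt/user/mount_mergerfs/gdrive \
         -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff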
  19. watchmeexplode5 already provided some answers so I'll just add to some other points. The 400k object count limit (object = file + folder) is the hard limit per team drive. In practice, anything approaching about 150k objects will cause that particular teamdrive to be perceivably slower (been there, done that). So keep that in mind, e.g. you might want to split TV Shows into smaller libraries. In terms of metadata, what sort of metadata? Typically the Plex db (which includes what I would call metadata), for example, is stored locally (and should be stored locally). You really don't want that stuff on the team drive because of the high latency. File attributes (which can also be considered metadata) depend on the service itself but are not part of the object limit. So back to the question: what other kind of metadata? I have an unencrypted tdrive for family photos (among other things). All our mobile devices are automatically synced to this tdrive (1 folder per phone) and that is mounted on the server as well. So if I need a photo from a certain phone or if I need to push a file to a certain tablet, I can do it from the server. The main reason the tdrive is not encrypted is that the Android sync app doesn't support rclone encryption.
  20. I would still recommend you watch SpaceInvader One's tutorials on YouTube just so you don't accidentally make a silly mistake.
  21. It depends on the file system. If it's btrfs then adding a drive to the pool will automatically run the RAID-1 profile, so no erasing (a sketch of what happens under the hood is below). If it's xfs then the pool will need to be formatted to btrfs, i.e. data will need to be erased.
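     A hedged sketch of roughly what happens under the hood when a second drive joins a btrfs pool (Unraid handles this through the GUI; the device name is an example):

       btrfs device add /dev/sdX1 /mnt/cache
       btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache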
  22. I would suggest opening the case and seeing whether the fans move, first and foremost. Next is to make sure you are using the right fan port on the mobo. Most mobos have a specific port for the CPU fan (and an extra CPU fan port for push-pull), the pump and the case fans. Some mobos also allow you to change which sensor the fan curve is based on. Make sure the right fan is on the right sensor. The various settings you see are the fan curves. The sensor setting should be independent. If all things are set up correctly, CPU load will only affect the fans plugged into the CPU port. So even if you load the CPU to 100%, only the CPU fans will speed up. For case fans, unless your mobo has special sensors at the right location (or thermal sensor ports), you can set them at a fixed rpm that you are comfortable with. Alternatively, change the sensor affecting the fan curve to the CPU and they will ramp up based on CPU load. I have all my intake fans running at 100% and the exhaust based on the CPU sensor (since the exhaust fan is right opposite the CPU cooler, it improves the removal of hot air from the cooler vicinity).
  23. If your Ubuntu storage is the generic kind (i.e. no encryption, RAID or other funky stuff), you should be able to just connect the Ubuntu disks to the Unraid server, mount them with Unassigned Devices and do the copy from the Unraid server. That is typically faster (e.g. you can do disk-to-disk parallel transfers as long as each disk only has 1 stream; an example is below). If your Ubuntu server has RAID-based storage then the safest is what you are planning, i.e. just do the transfer over the network.
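     A hedged example of copying from a disk mounted with Unassigned Devices (the mount point and share name are placeholders):

       rsync -avP /mnt/disks/ubuntu_disk/ /mnt/user/share/

     Run one of these per disk in parallel, keeping a single stream per disk.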
  24. You have a better chance of getting an answer on the Radarr forum. I don't think that's an Unraid issue.
  25. Temp support is driver- and kernel-dependent. Basically, if it works (or doesn't) then it is what it is. You can try the 6.9.0 beta, which has a 5.x kernel, to see if it helps. Set the fan curve in your mobo BIOS and then trust it to do its job. Not all hardware devices can be passed through. I remember the X570 onboard audio is in the "cannot" category. What mouse and keyboard are you using? Some wireless ones just don't like the libvirt virtual USB, so the only solution is to pass through a USB controller. It's highly unusual for a wired input device to be a problem. Mobo RGB control has never worked and unless someone puts in the effort to develop something, it will continue to be the case. Since it's cosmetic, don't expect anything from Limetech. You can only control it with the BIOS (i.e. prior to boot) and that's it.