testdasi

Members
  • Posts: 2,812
  • Joined
  • Last visited
  • Days Won: 17

Everything posted by testdasi

  1. Try signing up from within BOINC. Or use a BOINC account manager such as BAM! (yes, that's what it's called) to create an account for you from its website. If you run multiple projects and/or multiple clients, BAM! helps simplify the add-project process.
  2. That's one of the reasons why I used pipework instead of the various official methods with dockers. Pipework works in a transparent, easy-to-understand way, i.e. just like how you would set something up in your OS network settings.
  3. Your SSD is not even formatted, so it definitely wasn't mounted. That means your /mnt/disks/Windows was in RAM. You need to first format the UD (you might need to install the Unassigned Devices Plus plugin to be able to enable destructive mode in the UD settings to allow formatting - otherwise, the Format button is greyed out). Pick either xfs or btrfs depending on your needs. Then change the mount name to Windows and mount the formatted UD. Only then will it show up at /mnt/disks/Windows.
  4. Maybe a silly question, but are you sure your NVMe is mounted at /mnt/disks/Windows? If it's not, then your /mnt/disks/Windows would be in RAM, which would cause issues. NVMe and UD are certainly not the problem by themselves. I have 2 VMs currently running out of vdisks stored on an NVMe UD.
  5. As the CNET article clearly states, this is a Mac issue where it can't reconnect properly. So I guess you will need to ask Apple to fix it (or perhaps a Mac expert can opine).
  6. OK, so the case is correct. First, change your share to Public just to test and eliminate any connection issue.
  7. Yes, but not with the Unraid built-in VNC adapter (which has never worked for me concurrently with a GPU passed through). What you need is an alternative (free) remote desktop software. For example, Ubuntu has a screen-sharing feature which is actually just a VNC server, so it can be accessed with any VNC viewer app. For Mac, I have found NoMachine to work really well. NoMachine was actually my go-to cross-platform remote desktop software before I quit Mac VMs altogether.
  8. I could have explained it more clearly. The rclone upload job can be terminated silently (out of memory is the most frequent problem I have seen), in which case the termination of the upload script due to an existing ongoing upload is actually an indication that something is amiss. This would be missed by users if the script is run on a regular schedule during the initial massive upload. Most users don't check the upload log (and network stats etc.) and just assume things are chugging along in the background.
  9. Then the "error" is expected. One of the uploads is still running (as Kaizac said, 12Mbps is rather slow), so naturally the next run would stop. The whole upload control file exists exactly for this scenario, i.e. to avoid running multiple simultaneous uploads of the same files. To be honest, you shouldn't be running the upload script on an hourly schedule with such a slow connection. At least don't run it on a schedule until everything has been uploaded.
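The control-file check behaves roughly like the sketch below (the paths and the placeholder upload command are my own illustration, not the actual script's). Note that if the rclone job gets killed mid-run, the lock file stays behind - which is exactly the "something is amiss" signal mentioned above.

```shell
#!/bin/bash
# Rough sketch of the lock-file pattern an upload script uses.
# The lock path and the rclone command are illustrative assumptions.
run_upload() {
    local lockfile=/tmp/upload_demo.lock
    if [ -e "$lockfile" ]; then
        # A previous upload is (or appears to be) still running.
        echo "upload already running - aborting this run"
        return 1
    fi
    touch "$lockfile"
    # rclone move /mnt/user/local remote:backup   # placeholder for the real job
    echo "upload done"
    rm -f "$lockfile"
}

run_upload
```

Running it a second time while the lock file exists prints the abort message instead of starting another upload.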
  10. What are you trying to upload? Did your prior run manage to finish its upload before you started the 2nd one?
  11. That sounds like an innate instability somewhere in the system. BOINC and f@h put unusually high load on the hardware, so any instability is more likely to manifest itself. For example, I can reliably put my server in a hard lockup by turning on Precision Boost + running BOINC / f@h / weirdly, a very specific Adobe Lightroom job that barely loads the CPU at all. I know it's due to an AMD tweak to PB in the newer BIOS that reduces voltage to reduce temps. Going back to an old BIOS, it just runs really hot but no more lockups. That's just an example though (it obviously doesn't apply to your Intel system, but it gives an idea). I'm guessing it's either heat related or voltage / current related.
  12. To both: if you have waited more than 8 hours and still have no work, restarting the docker / app (if you use Windows) may help. The server is indeed overwhelmed by the public response, and the matter is complicated by a bug that can cause f@h to be stuck in a loop if it receives an HTTP error. Obviously, don't restart the docker all the time, because you will worsen the matter by putting extra unnecessary load on the already overwhelmed servers. And of course, it's even better if you can run both BOINC and f@h at the same time. I recommend 2 BOINC projects: Rosetta and World Community Grid. Between the 2 of them, you should have plenty of work to do (not necessarily all of it contributing to COVID-19, but certainly to other good medical causes). Rosetta has already participated in COVID-19 research (and released a news article about its contribution). WCG has also released a statement that they are reviewing COVID-19 related projects to add.
  13. It's not about deleting the VM, just the vdisk file. Once the vdisk file is deleted and you open the VM in the VM GUI, it will go back to create-a-new-vdisk mode. Take a screenshot of what you use for that.
  14. Take a screenshot of your Shares page on the Unraid GUI. PS: It's not about whether the folder name needs to be upper or lower case. It's about matching what's actually on your server.
  15. Try adding this just after "append " in your syslinux config: vfio_iommu_type1.allow_unsafe_interrupts=1
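For reference, the relevant block of syslinux.cfg would end up looking something like the sketch below - keep whatever other parameters your append line already has; only the new parameter is being added:

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
```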
  16. Start a new topic. Your problem may or may not be the same as the OP. Adding it to here will just confuse anyone trying to help.
  17. Stick to the RAID-1 pool + 500 UD. You can't do that kind of nested RAID (not that it's even recommended). Also, RAID-0 is highly NOT recommended for new users due to their inability to appreciate the risks involved. Simplicity is an underrated value.
  18. What are you using the GTX 960 for? What are you planning to use the RX 480 for?
  19. Don't use the VM autoboot feature; use a bash script instead (use the User Scripts plugin to schedule it at array start). That way you have full flexibility to control the order of VM starts (virsh start [VM name]) as well as how long to wait (sleep [n]s) between boots.
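A minimal sketch of such a script (the VM names and the delay are placeholders - check your actual names with virsh list --all):

```shell
#!/bin/bash
# Start VMs in a fixed order, pausing between boots.
# VM names and the delay are examples - substitute your own.
start_vms() {
    local delay=$1; shift      # first argument: seconds to wait between boots
    for vm in "$@"; do
        virsh start "$vm"      # boot this VM
        sleep "$delay"         # give it time before starting the next one
    done
}

# Example usage (uncomment and edit with your own VM names):
# start_vms 60 "Windows 10" "Ubuntu"
```

Schedule it in User Scripts with the "At Startup of Array" option and the VMs will come up in exactly that order every time.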
  20. Your NVMe is not being stubbed; otherwise, it would show up in the Other PCI Devices section of the VM template and disappear from SCSI Devices. Move your vfio-pci.ids section to just after append.
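In other words, the append line should look something like this (the PCI ID below is only an example - use the vendor:device ID shown for your NVMe under Tools > System Devices, and keep any other parameters you already have):

```
append vfio-pci.ids=144d:a808 initrd=/bzroot
```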
  21. Because people don't tend to switch between xml and GUI frequently. And even if they do need both, it's trivial to save the template first and then re-edit.
  22. Try installing an earlier version of the AMD drivers. I have seen several cases of newer AMD drivers not liking passed-through GPUs. Since 1809 worked, try a driver that was available in 2018. In terms of alternative cards, without knowing your hardware config, it's hard to give a recommendation. Even knowing your config, any recommendation is not a guarantee (e.g. not dissimilar to how your RX470 worked but then stopped working), so take it with a grain of salt.
  23. I don't think there's any tool that would do it automatically. If the things have a command-line interface, you can set up a script to do regular backups.