Auggie


Everything posted by Auggie

  1. From what I could gather from Googling and personal testing, there is no easy way to change the VNC resolution (without a GPU-passthrough setup) when using UnRAID's VNC Remote. In addition to the glaring problem of the missing cursor in UnRAID's VNC Remote (exhibited by all popular browsers under both Windows and OS X), I ended up using Microsoft Remote Desktop, which requires logging into Ubuntu using the Xorg display server (a drop-down option available on the login screen). To restore a presentable GUI under Xorg, I installed "Tweak", which provides the Wayland interface themes and icons.
  2. Well, I just tried v6.5 VNC Remote for my Ubuntu 17.10 VM under two OSes and four browsers:
     1) Win10/Edge = missing cursor
     2) Win10/Chrome = big black square cursor, roughly the size of a typical icon
     3) Win10/Firefox = missing cursor
     4) OS X 10.13.4/Safari = missing cursor
     And it still doesn't address the inability to effectively change the screen resolution. I'm sticking with MS RDP/Xorg for now, after installing an interface utility that has at least allowed me to "restore" Wayland themes, icons, and other GUI functionality to the Xorg display server.
  3. I tried UnRAID's built-in browser-based VNC setup, and it has significant issues on OS X: the cursor is not synced upon initial access or upon waking the VM from sleep, and resolution is limited (changing the display size in Ubuntu freezes startup/restart unless I switch the graphics mode to vmvga, which introduces performance issues and other errors). I also tried Microsoft RDP by logging in with Xorg, but it, too, has significant errors: constant "problem encountered" dialogs asking to submit reports to developers, generic application icons, and now the File Browser no longer seems to launch. So I'm looking for a stable, trouble-free VNC setup that allows higher resolutions than UnRAID's built-in VNC.
  4. What I determined is that the auto-mount option in fstab does not always mount the shares after a startup/reboot. I have 3 shares I want to auto-mount, and across startups/reboots, which ones actually get mounted appears to be random: sometimes all of them, sometimes only a subset, and sometimes none at all. I tried a crontab entry using @reboot, but this, too, does not guarantee success. I wanted to automate several processes for my headless Ubuntu server that depend on the user shares being mounted, but at this time I can't seem to find a fully automated solution...
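     A speculative workaround for this boot-time race, assuming the shares are NFS exports from the Unraid host (the hostname "tower", mount points, and share names below are placeholders): systemd's automount option defers each mount until first access instead of racing the network at boot, which avoids the silent-failure lottery.

```
# /etc/fstab — hypothetical entries; adjust host, shares, and mount points.
# _netdev waits for the network; x-systemd.automount mounts lazily on first
# access, so a slow-starting network no longer causes a missed mount.
tower:/mnt/user/Share1  /mnt/Share1  nfs  _netdev,x-systemd.automount,noauto  0  0
tower:/mnt/user/Share2  /mnt/Share2  nfs  _netdev,x-systemd.automount,noauto  0  0
tower:/mnt/user/Share3  /mnt/Share3  nfs  _netdev,x-systemd.automount,noauto  0  0
```

     With this in place, any script that touches a path under the mount point triggers the mount on demand, so startup automation no longer depends on mount ordering.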
  5. I'm not being vague unless you're unfamiliar with YAMJ. Since YAMJ version 3 has quite a few moving parts, and I don't fully understand the inter-communication between all the components myself, I can't begin to describe the exact requirements. Hence the generalization that multiple applications need to communicate with one another or run under a runtime environment (e.g., Java).
  6. I installed an SSD to use for a VM but mistakenly added it to the cache pool, then discovered that I would lose total capacity. After stumbling about to remove it from the cache pool (I ended up having to reformat my original cache SSD), it doesn't appear that Unassigned Devices is able to mount it, which is necessary in order to use the drive for my VM outside of the array. I'm able to add it to the array (at least, it lets me add it, but I never commit) and to the cache pool. So is the inability to mount it via this plugin a problem with the plugin, a consequence of my mistaken step of initially adding it to the cache pool, or something in UnRAID (6.4.1) itself? UPDATE: I figured it out. I first had to enable Destructive Mode in the Unassigned Devices settings page. This enabled the FORMAT button, and after formatting, the Auto-Mount feature became available.
  7. The lack of responses suggests either that there are no easy answers, or that those in the know are too busy or unavailable to assist at this time. I've now abandoned using Docker for my purpose and am pursuing a VM approach...
  8. FYI, Ubuntu noob alert! I got the latest Ubuntu desktop headless VM installed and running, but I can't for the life of me figure out how to access the host's user shares (preferably through NFS, not CIFS/SMB). The VM XML has the typical code that I entered during VM setup:
     <filesystem type='mount' accessmode='passthrough'>
       <source dir='/mnt/user/MyShare'/>
       <target dir='MyShare'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </filesystem>
     But I just can't figure out how to access it within the Ubuntu VM, and a Google-fest has provided all kinds of options through fstab and mount, but very little specific to an UnRAID Ubuntu VM that worked for me. In /etc/fstab, I've tried:
     MyShare /mnt/user/MyShare 9p trans=virtio,version=9p2000.L,auto,nobootwait,rw,_netdev 0 0
     ...and:
     MyShare /mnt/user/MyShare 9p trans=virtio,version=9p2000.L,auto,nobootwait,rw 0 0
     The latter, without _netdev, results in a boot error, though the mount point is created, albeit as an empty folder -- does that mean it actually mounted the user share, but because of permissions the mounted share appears empty? (It's a public share containing media files with no security enabled.) Help?
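     For reference, the 9p share can be mounted manually inside the VM to separate fstab problems from 9p problems (a sketch using the names above; the mount tag must match the <target dir> from the VM XML):

```
# Inside the Ubuntu VM, as root. The tag "MyShare" must match <target dir>.
modprobe 9p
modprobe 9pnet_virtio
mkdir -p /mnt/user/MyShare
mount -t 9p -o trans=virtio,version=9p2000.L MyShare /mnt/user/MyShare
ls -la /mnt/user/MyShare     # if this lists files, the 9p transport works
dmesg | tail                 # if the mount failed, the reason usually shows here
```

     An empty folder after boot usually means the fstab mount itself failed (it ran before the 9p modules or virtio device were ready), not that permissions hid the files.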
  9. I am attempting to install a Java-based CLI application that uses a MySQL database and a Jetty web server. So how do I install all four different applications into UnRAID Docker container(s) so they can all interact with one another? From my limited understanding, different Docker containers do not have direct access to one another except perhaps through an IP:port address, so at a minimum the Java app and the Java Runtime Environment would appear to need to be installed in the same Docker container. But could I install all of them in one Docker container? I got as far as installing a MySQL Docker container via the Community Applications plugin and created my needed database and user, but now I'm stuck on how to proceed with the remaining applications. FYI, I'm attempting to install YAMJ3 on my UnRAID media server box.
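     The usual Docker pattern for this, sketched under the assumption that one builds a custom image bundling the JRE, the Java CLI, and Jetty ("yamj3-image" below is a hypothetical placeholder; YAMJ3 has no official image that I know of): containers placed on the same user-defined network can reach each other by container name.

```
# Hypothetical commands; "yamj3-image" stands in for a self-built image.
docker network create yamj-net

# MySQL, reachable from other containers on yamj-net as mysql:3306
docker run -d --name mysql --network yamj-net \
  -e MYSQL_ROOT_PASSWORD=changeme -e MYSQL_DATABASE=yamj3 mysql:5.7

# A single container CAN bundle the JRE, the Java app, and embedded Jetty
docker run -d --name yamj3 --network yamj-net -p 8888:8888 yamj3-image
```

     The Java app would then be pointed at jdbc:mysql://mysql:3306/yamj3 rather than a hard-coded IP address.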
  10. I ended up copying the entire contents of the flash drive, installing a freshly downloaded clean 6.3.5, booting it to verify UPS connectivity (it worked), then copying the old config folder over. All is now good-to-go! Though I wish there were a less destructive method...
  11. Through the years I've successively upgraded my initial v4.7 unRAID install all the way to its current v6.3.5. Today I noticed that v6 now natively supports APC UPSs, while the original third-party APC UPS plugin that I installed way back under v4 or v5 (can't remember) was no longer fully functional, so I decided to finally remove this discontinued plugin. After doing so, however, the built-in APC UPS support lost communications with the UPS, and rebooting unRAID does not resolve the issue. Do I need to copy over a completely fresh 6.3.5? Or is there a less drastic method of restoring built-in APC UPS communication?
  12. UPDATE: Okay, I discovered I have three cache-drive-only user shares: "System", which DOES contain the libvirt.img file, and the other two user shares I mentioned previously. VM Manager was active (running), so I just disabled VMs. I'm going to stop there (not delete the libvirt.img file) and see if that in and of itself prevents the "ISO" directory from being created... And if it does stop this folder from being created, then I believe it's still a bug, because this "ISO" folder was being created while I did have an active VM installed, when there doesn't appear to be any reason for it to exist.
  13. BTW, I'm not physically located where this media server resides (halfway across the country), so I'm not actively trying to track down this bug. It's only when I remote in to do some file movements that I notice the notorious "ISO" directory has reappeared and do my duty to delete it. But it was high time for me to query the forums to see how commonplace (or not) this issue is. I was last on-site testing out VMs last summer, so it's been a long time and I'm not too familiar with all the steps and necessary files, but I couldn't find "libvirt.img" within the ISO directory. I do have a "Domains" share that contains an OS X image to be the installer for my next VM test, but that's the only file that exists on that share, and it is set to reside only on the cache drive. So in reality, I have two user shares on my cache drive, "Domains" and "ISO Library Share", and both are set to reside only on the cache drive.
  14. Normally, I would agree. But I can't think of any other possible reason why a mysterious empty directory named "ISO" always gets created. Absolutely no reason. This unRAID system is purely a media server, and there are no directories named with just the word "ISO" anywhere. So I believe this esoteric bug only manifests in the way my system is currently configured: a VM environment was established and then deleted with no other changes to settings, while a supporting directory of ISO files was allowed to remain intact. Is it possible there is some hard-coded default directory name of "ISO" used for supporting ISO images? That's also a possibility. One way I can test this is to rename my supporting directory to something without "ISO" in the title, but still with multiple words, and see what happens. Anyway, I never had this problem before I started testing and creating a VM.
  15. I no longer have any VMs; they've all been deleted, as this was my first foray into exploring them (I hadn't known that at some point in unRAID 6, VM support was thrown in). I initially installed Win 10 and, after trying it out, scrapped it and was about to try some flavor of OS X when I decided to put this little project on hold for a while; I still kept the ISO Library Share, containing the installation images, intact on the cache drive. But in the meantime, I've been trying to keep this other "ISO" folder from popping up. Again, I think it's a bug in the daily 'mover' routine: it sees the directory on the cache drive and, even though the share is marked to reside only on the cache drive, still attempts to copy it. A "sub-bug" within it is that it only parses the first "word" of the directory name, which is why it creates the directory "ISO" instead of "ISO Library Share" on the data drive(s).
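     The "first word" symptom is consistent with a classic shell quoting bug: an unquoted variable holding a name with spaces gets word-split. A speculative demonstration of that bug class (this is NOT unRAID's actual mover code, just an illustration of how a stray "ISO" directory could appear):

```shell
#!/bin/sh
# Demonstration of the word-splitting bug class; not unRAID's actual mover code.
demo=$(mktemp -d)
cd "$demo" || exit 1

share="ISO Library Share"

mkdir -p "$share"   # quoted: creates the single directory "ISO Library Share"
mkdir -p $share     # unquoted: word-splits into "ISO", "Library", and "Share"

ls -d ISO           # the stray "ISO" directory now exists
```

     Any script in the chain that expands the share name unquoted would reproduce exactly this: an empty directory named after the first word only.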
  16. Yea, sure looks like that to me. But I wanted to solicit opinions in case I missed a setting in unRAID that could be causing this anomaly.
  17. Yep. I deleted it through the web GUI, but it reappears again some time later. I understand that if a directory named "ISO" appears at the root level of any drive, a share will be created. After deleting the "ISO" directory through the web GUI, I manually checked every data drive to confirm that no "ISO" directory or file exists. But like clockwork, a new "ISO" directory eventually gets created, usually on the data drive with the most free space (but not always).
  18. What exactly am I looking for here? I set this path long ago and it looks good to me: Default ISO Storage Path = /mnt/user/ISO Library Share. This path has always existed on the cache drive only and, as I said earlier, contains 45GB of images. How does this explain why unRAID continuously creates yet another public user share on one of the data drives, names it "ISO", and leaves it empty?
  19. This issue appears to have surfaced after I created a private user share to exist only on my cache drive and hold ISO images to support my VM. I named this share "ISO Library Share". But then, mysteriously, a public share named "ISO" would be created on one of the data drives, and every time I delete it, either by accessing the disk share directly and deleting it in the remote OS (e.g., OS X Finder in my case) or through the Shares tab of unRAID's control panel, this mysterious "ISO" share recreates itself somewhere else later on (I never timed when it reappears, but it seems to be at least the next day). Annoying, and I can't figure out why. Does it have something to do with unRAID's cache "mover" system that moves data from cache drives to data drives? If so, I believe I've checked off everything to indicate that the "ISO Library Share" is not to be copied/moved to data drives. Even then, the mystery share doesn't have the same name (though its name is a subset of it). Also, the mystery share is always empty, while the "ISO Library Share" holds 45GB worth of disk images. Is anyone else experiencing this phenomenon? Is this a bug? FYI, I currently have the one VM deleted, as I was just testing the functionality of this feature. Eventually I will have one or two VMs established, but I haven't yet decided which OSes they will be.
  20. I've had this little board for about 4 years and just recently moved it into a new Norco RPC-4224, and I want to control the massive 120mm fans and 80mm exhaust fans (new PWM Noctuas) like I do with my first RPC-4224 setup. In its previous case I let the mobo handle fan control via the BIOS, but now I want it controlled with the same unRAID script (http://lime-technology.com/forum/index.php?topic=5548.105) that I use with my media server, which is based on drive temperatures. It has certainly been done before with this board (http://lime-technology.com/forum/index.php?topic=11310.0;nowap), as it's listed as having two PWM fan headers, but when I run pwmconfig it finds no PWM modules:
     /usr/sbin/pwmconfig: There are no pwm-capable sensor modules installed.
     When I relocated the board into the new case, I updated its original factory-installed BIOS to the latest R1.2b. This BIOS allows four different automated fan settings (full power, performance, balanced, power saving), but it seems it doesn't allow external control of the fans under any of these selections. Can any X7SPA board users with the latest BIOS verify that they can't control PWM fans via script?
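     For comparison, the standard lm-sensors sequence for exposing PWM controls looks like this (the w83627ehf module below is only a guess at the X7SPA's Super-I/O chip; use whatever sensors-detect actually recommends):

```
sensors-detect                    # probes for sensor chips and suggests modules
modprobe w83627ehf                # placeholder: load the module sensors-detect suggests
ls /sys/class/hwmon/hwmon*/pwm*   # PWM control files appear here when supported
pwmconfig                         # should now find a pwm-capable module
```

     If the BIOS keeps fan control locked to its own profiles, the OS may never see writable PWM registers, which would match the symptom above.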
  21. I'll look into those. There are a couple on eBay presently: a 4U 48-bay used unit and a 9U 50-bay new unit. The 9U version is interesting in that all bays are directly accessible through the front face, though it is an extremely tall unit at 16".
  22. Thanks for this! Though that's mighty expensive just for the oldest v4.5 pod they sell: $1,400 for the backbone version (which includes the SATA cards, cables, and, more importantly, the backplanes; really required since these are custom-made for this case). And unless unRAID plans to offer a 45-data-drive version any time soon, it's not a viable solution for me, since I really only need a 30-drive case. 45Drives does offer a 30-drive solution, but it's a complete system sans drives and costs $3,100. It doesn't appear they offer just the bare case or a "backbone" version without the motherboard and accompanying accoutrements.
  23. I measured the available open areas, and in my primary server with a micro-ATX board there's only enough room for two drives placed parallel to the main axis of the case on either side of the mobo. That is sufficient for my needs, as it would cover my 4 new data drives, bringing me up to 28 data drives. I'm hoping the 120mm fan wall will provide enough cooling for those drives, as they will be situated right behind it. I've already placed the parity drive and cache drive on the aforementioned PCI bracket (the case was designed before Norco introduced the OS Drive Bracket); I won't be implementing a second parity drive, as this is strictly a media and VM server. My newly assembled second 4224 system uses a mini-ATX board, and there is enough room on the PCI side of the case for a 5-drive block that I plan to put together with an aftermarket drive cage with an integrated 120mm fan (whenever I max out the 24 slots). As this server is my backup server but also the recipient of older drives replaced from my main server, it has a higher probability of failures due to the high power-on hours and age of the drives (it just experienced SMART errors on 2 drives almost simultaneously), so I plan to use 2 parity drives along with at least one SSD cache drive (installed on the OS Drive Bracket).
  24. My RPC-4224 has been full-up with 24 drives since I put it together in 2013. IIRC, at the time, unRAID v4.x would only support 24 drives total, or at least 24 data drives. Over the years, as I've purchased larger-capacity drives, I simply moved the replaced lower-capacity drives to another unRAID server. Recently, to my surprise, I discovered that v6.2.x now supports 28 data drives alone, in addition to 2 parity drives and 24 cache drives. So I'm curious: what are those who've maxed out a 4224 doing to expand beyond its 24 drive slots to take advantage of unRAID's higher drive-count support? External drive enclosures? Or drives loosely laid out alongside the motherboard in the back of the case? I've already moved my parity drive to a PCI bracket that holds both a 3.5" and a 2.5" drive (to which I installed a spare 60GB SSD for VMs), then added a 90mm fan that blows directly onto the HDD to keep it cool (it consistently runs several degrees Celsius cooler than the drives in the regular slots). I'm contemplating one of those 30-drive-capacity cases from 45drives, but I'm not sure if they sell them bare, and I don't like losing the ability to quickly replace data drives as I upgrade them.