Leaderboard

Popular Content

Showing content with the highest reputation on 05/07/20 in all areas

  1. A little bird told me that they are already at beta 8; huge changes coming...
    3 points
  2. This empirically is due to the binhex-preclear install, but only indirectly. When you install that plugin it asks you for volume mounts to give to the underlying docker daemon for mounting inside the docker container. Host Path 6 is /boot/config/plugins/dynamix/dynamix.cfg. This *should* be a file, I'm assuming, but it didn't exist on my Unraid. When you give docker a path that doesn't exist, it automatically creates a folder at that path. So when I installed binhex-preclear the docker daemon created the folder /boot/config/plugins/dynamix/dynamix.cfg/. If you make this folder and refresh your WebUI you will see that it generates the php errors described above. The *easiest* fix is (the commands are collected below):
     1. Uninstall the binhex-preclear plugin/app.
     2. Remove that path from your dynamix plugins folder (or at the very least rename it to something that *doesn't* end in .cfg).
        * You can move the offending directory out of the way via the terminal with: mv /boot/config/plugins/dynamix/dynamix.cfg /boot/config/plugins/dynamix/dynamix.cfg.broken
        * Or, if you want to remove it: rmdir /boot/config/plugins/dynamix/dynamix.cfg
     3. Refresh the WebUI and you'll see the errors are gone.
     If you then want to reinstall the binhex-preclear plugin, first create the file that the plugin wants to pass through to the docker container. From the terminal run: touch /boot/config/plugins/dynamix/dynamix.cfg
     [EDIT] Squid pointed out that making changes within Settings -> Display Settings will also create the file /boot/config/plugins/dynamix/dynamix.cfg, so there's no need to enter the terminal if you don't want to. Then proceed with the install as you normally would and things should operate just fine.
    3 points
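For convenience, here are the commands from the fix above collected into a single terminal session; the paths are exactly as given in the post, and you only need one of the mv/rmdir lines:

```bash
# Move the wrongly created directory out of the way...
mv /boot/config/plugins/dynamix/dynamix.cfg /boot/config/plugins/dynamix/dynamix.cfg.broken

# ...or remove it outright instead (it should be empty, so rmdir is enough)
rmdir /boot/config/plugins/dynamix/dynamix.cfg

# Before reinstalling binhex-preclear, create the file the plugin wants to pass through
touch /boot/config/plugins/dynamix/dynamix.cfg
```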
  3. I've done a bunch of stuff this week. I'm not sure about being out of beta, but the code is starting to settle into a permanent structure, which means we're getting closer to being out of beta. I think the app is pretty close to the point now where no more major changes will require a clean install or anything. There is one other thing I need to do yet, and that is to move all the task queues to the database. That change may or may not require a clean install. Once that is done, I think I'll push Unmanic out of beta. Today I'm testing Nvidia hardware encoding changes (not yet available via docker); so far so good.
    2 points
  4. Application Name: Unmanic - Library Optimiser
     Application Site: https://unmanic.app/
     Docker Hub: https://hub.docker.com/r/josh5/unmanic/
     Github: https://github.com/Unmanic/unmanic/
     Documentation: https://docs.unmanic.app/
     Discord: https://unmanic.app/discord
     Description: Unmanic is a simple tool for optimising your file library. You can use it to convert your files into a single, uniform format, manage file movements based on timestamps, or execute custom commands against a file based on its file size. Simply configure Unmanic, point it at your library and let it automatically manage that library for you. Unmanic provides the following main functions:
     - A built-in scheduler to scan your whole library for files that do not conform to your configured presets. Files found with incorrect formats are then queued for conversion.
     - A folder watchdog. When a video file is modified or a new file is added to your library, Unmanic checks that video against your configured video presets. As with the first function, if the video is not formatted correctly it is added to a queue for conversion.
     - A handler to manage running multiple file manipulation tasks at a time.
     - A Web UI to easily configure, manage and monitor the progress of your library optimisation.
     You choose how you want your library to be. Unmanic can be used to:
     - Modify and/or transcode video or audio files into a uniform format using FFmpeg
     - Move files from one location to another after a configured period of time
     - Execute FileBot to automatically batch rename files in your library
     - Extract ZIP files (or compress files to .{zip,tar,etc})
     - Resize or auto-rotate images
     - Correct missing data in music files
     - Run any custom command against files matching a certain extension or above a configured file size
     The Docker container is currently based on the linuxserver.io Ubuntu Focal image.
     Video Guide:
     Screenshots: Dashboard, File metrics, Installed plugins
     Setup Guide: Unmanic Documentation - Installation on Unraid
     Thanks To/Credits: Special thanks to linuxserver.io (for their work with Docker base images): https://www.linuxserver.io/ and Cpt. Chaz (for providing me with some updated graphics and his awesome work on YouTube): https://forums.unraid.net/profile/96222-cpt-chaz/
    1 point
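For anyone installing outside of Community Applications, a minimal docker run sketch for the image above might look like the following. The PUID/PGID values, the 8888 web UI port and the /config and /library mappings are assumptions based on typical linuxserver.io-style containers, so check the Unmanic documentation for the authoritative template:

```bash
# Hypothetical invocation; verify the port and volume names against the Unmanic docs.
docker run -d \
  --name=unmanic \
  -e PUID=99 \
  -e PGID=100 \
  -p 8888:8888 \
  -v /mnt/user/appdata/unmanic:/config \
  -v /mnt/user/media:/library \
  josh5/unmanic
```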
  5. notification:generate John Doe hola
    1 point
  6. The issue is rather subtle so I'm not surprised if you find it difficult to grasp what exactly is going on. I have the cache setting on the shares on my system which hold all my TV and films set to YES even though 99% of the time I am reading from them, so unless you have a specific reason why you must set it to NO then this is the easiest solution. For the other solution, no, you do not need to create a second user. What you will find when your array has been started is that your shares appear under the directories /mnt/user and /mnt/user0. The difference is that the /mnt/user ones include files that are on your cache + data drives, while the /mnt/user0 ones only include the files that are on your data disks. The /mnt/user0 directory is really a hangover from the old way that mover used to be implemented to copy files from the cache disk to the array. It has since been replaced with a dedicated mover program which understands how to move the files to/from the cache disk without using it. Tom has dropped hints that he would like to get rid of /mnt/user0, but some people have indicated they do have uses for it, so it may or may not disappear in a future release.
    1 point
  7. I think the problem is that a file has been created on a share which uses the cache disk, so the file is created on the cache disk. The file is then moved to the Media share, and rather surprisingly you find it is still on the cache disk in the Media share, even though the Media share is set to cache NO. This is actually expected behaviour and the reason is rather subtle. The cache setting on a share only affects the creation of new files, not files which already exist on a disk. Since the file already exists on the cache disk and the cache disk is included in the Media share, the cache setting does not apply, so the file is moved into a directory in the Media share on the cache disk. There are 2 possible solutions:
     - Change the Media share to cache YES so the mover will move the file to the data disks when it runs.
     - Move files to /mnt/user0/Media rather than /mnt/user/Media, as that does not include the cache disk, so the move is forced to do a copy & delete rather than just a rename (see the example below).
    1 point
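To make the second workaround concrete, a quick sketch with made-up share and file names; only the destination path matters:

```bash
# Within /mnt/user the 'move' can be a pure rename, leaving the file on the cache disk:
mv /mnt/user/Downloads/example.mkv /mnt/user/Media/example.mkv

# Targeting /mnt/user0 excludes the cache disk, so the file is copied onto an
# array data disk and then deleted from the cache:
mv /mnt/user/Downloads/example.mkv /mnt/user0/Media/example.mkv
```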
  8. It does, I'm using it now and it works well.
    1 point
  9. Hmm, not from Twitch, but if I can figure out how to share the Twitch folder to Unraid maybe I can change the docker's serverfiles parameter. I'm not really sure how to do that though, since my Windows install is on a native NVMe drive stubbed to the VM instead of using a vdisk. Edit: Well, it turns out the Twitch button is just a quick way to download the files; it isn't actually a server management capability. So I would still have to do it the manual way anyway. No real need then to link the server files, since it will take just as long anyway.
    1 point
  10. Sure, but there is nothing novel I have done to expand on. I followed the @SpaceInvaderOne guides (noting what I have posted above) almost to the letter for a setup with my own domain name and using Cloudflare dns. The only issue I had is when I used the Cloudflare proxy service - which I ended up turning off - but that had nothing to do with Pihole. I also used LAN IP addresses in my config rather than local dns (e.g. x.x.x.x rather than mariadb.local.tld) so that resolution was not required. Router provides DHCP and reserves IP addys. I assume you had Nextcloud working perfectly using the Letsencrypt docker before you installed Pihole?
    1 point
  11. @tkohhh The key reason I made my own was actually because the official repo requires root to install itself. My container does not, and therefore it is compatible with locked-down environments like OpenShift where security is important. Additionally, the official repo is bloated for what it provides. The official repo seems to install itself by first installing Ansible inside a Debian/RedHat/CentOS container, then installing Splunk using Ansible. This seemed odd to me as I'm able to fully download and install Splunk in about 5 lines of bash. The official container also has no optimisations for small-footprint use. My container has several tweaks that make a massive difference to footprint. For example, the official repo will store Splunk's 20-ish internal log files in /opt/splunk/var/log/splunk/* and allow them each to grow to 25MB before rolling them. It will then store 5 additional rolled files (e.g. splunkd.log.1, splunkd.log.2, splunkd.log.n), consuming a not-insignificant amount of disk space. My container will only store one of each file, in addition to logging dramatically less noise to them thanks to an optimised internal log config that I wrote. Additionally, I have also written config that disables some features that are enabled by default but only used in enterprise environments, such as several internal automated Distributed Service Monitoring dashboards+reports in the Health Monitoring Console. Splunk's official repo doesn't even support distributed deployments so it's odd that they don't disable this stuff too. This container has no issues running in distributed mode (and it is in several instances) if that is required, but for the majority of my Docker users I suspect they're running standalone and don't need this resource wastage. So the TL;DR is that mine has a much smaller storage footprint, requires significantly fewer resources to run, and is capable of never using the root user/running as an arbitrary user.
    1 point
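The "5 lines of bash" claim is roughly this kind of thing; a hedged sketch only, with a placeholder download URL/version (grab the real tarball link from splunk.com/download), and not the actual build script of the container above:

```bash
# Placeholder URL/version; substitute the real tarball link from splunk.com/download
wget -O /tmp/splunk.tgz "https://download.splunk.com/.../splunk-<version>-Linux-x86_64.tgz"
tar -xzf /tmp/splunk.tgz -C /opt
/opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt
```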
  12. I still can't actually create a folder. I tried updating and importing old folders, and I also tried creating new folders, but neither seems to work. I am free to TeamViewer generally between the hours of 8:30 pm UTC and 5 am UTC. I might be able to do something like 11 am UTC for half an hour, but that would be day-dependent and I probably wouldn't know until the day. Cheers
    1 point
  13. It's OK, I've got it. One file in each of the problem series had been corrupted from a data rebuild and a lot of moving about recently, and it turns out the file/folder parser doesn't fail gracefully in these circumstances. SSH in and rm the problem files, and we are back running fine!
    1 point
  14. I have spent all morning trying to get this working to no avail. I am glad it's not just me!
    1 point
  15. There already is a tmm GUI version; this uses the same software, just via the command line. There is one catch: if you use Radarr/Sonarr and your files get updated, tmm will not rescrape the data for the newer file. You have to do that manually with the GUI version, as it already thinks the movie is scraped. It's a limitation of tmm itself.
    1 point
  16. I'll work on getting it into the CA repo soon. Probably by next week. A few things though: I didn't end up implementing fail2ban yet, but still plan to. Caddy 2 is now in GA so I'll be working that out too; in a separate build.
    1 point
  17. Hello. I updated this morning to 2020.05.05a and now I am seeing this in the logs:
     May 6 10:32:14 legion preclear_disk[29247]: error encountered, exiting...
     May 6 10:34:49 legion preclear_disk[32017]: error encountered, exiting...
     May 6 10:35:45 legion preclear_disk[660]: error encountered, exiting...
     Since I had performed an update, I also tried completely removing the plugin and reinstalling it, but I am getting the same result.
    1 point
  18. Have you looked to see if it's enabled in the bios?
    1 point
  19. FYI, as of 6.8.2 Unraid should work with wildcard certs. See:
    1 point
  20. Hello, new to the Unraid universe. I spent a lot of years with 2 Xpenology-based NAS that suited me perfectly, but one day I made the mistake of launching an update incompatible with the version installed on one NAS. Luckily it was not the main NAS and there was nothing unrecoverable on it. So I tested in turn FreeNAS, OpenMediaVault and Unraid, and I must say Unraid seduced me and I decided to keep it. I'm testing my HDDs with the Preclear disk plugin before integrating them into the array. It's a really powerful tool. In these times of confinement I have time to try to discover all the facets of this software. Reading the forum has already helped me a lot, as well as the videos of Spaceinvader One. Thanks to him!
    1 point
  21. Yes, I have it working great from following SpaceinvaderOne's video tutorial: https://www.youtube.com/watch?v=yD6FJEYRpsY
    1 point
  22. Hello all, I am new to the forum and I am investigating creating a NAS system by reusing some old hardware and buying some new stuff. I am in the UK and we don't have anything like Micro Center; there is Currys but they aren't always the best for price.
     Old kit:
     CPU: Intel Xeon X5670 or i7 920
     RAM: 12GB (6x2GB) Patriot Viper DDR3 1600
     GPU: Sapphire Radeon HD 5850 2GB
     MoBo: Gigabyte GA-X58A-UD7
     PSU: EVGA GQ 750
     SSD: Kingston A400 120GB
     Case: Coolermaster MasterBox 5 (with some drive caddies from my other cases)
     External drives: 1x WD 4TB, 1x Samsung 2TB
     New kit:
     HDDs: for Unraid, 4x 4TB drives with 2 of these for parity. I was expecting to need 6x 3TB RAIDZ2 drives for FreeNAS.
     My current setup is 3x desktops (me, the wife, my son), 2x smart TVs and 2x laptops, one belonging to the wife and one used in the workshop. Currently the data is split between the desktops, with the wife's acting as a Plex and photo server. I have a backup from that desktop onto one of the external drives, and my desktop backs up onto the other external drive, but it is a bit of a pain checking that these have run and I have duplicate information on the systems. I want to bring this all together into one system so it is easier to maintain and accessible from all the machines, with some restrictions for some users. I use Google Cloud to back up my photos in addition to the local backup, but it is expensive to increase the storage capacity. I currently have about 7TB of data over the machines, but some of this is duplicated; my amount of data doesn't increase very rapidly and the 1TB overhead will probably last 8-12 months.
     I have a couple of questions that will help with my decision as to which way to go:
     1. Do you think the hardware above is OK for running Unraid?
     2. Can I set up the system to sleep over night and during the day when there is minimal usage?
     3. How do the two systems compare in relation to drive failure?
     4. How does Windows see the drives? Can I set it up as a single drive and have folders within this that have some restrictions?
     5. How is the data written to the drives? For example, if I put a folder with 4 documents inside, are these split across the drives?
     6. How easy is it to expand the pool?
     7. How easy is it to handle a drive failure?
     8. Can I run a backup to an external drive over USB?
     9. What should I do with the SSD (I was going to install FreeNAS on it)? Could I use it as a cache drive, or should I get another one for RAID 1 before I do this?
     10. How does a cache disk affect the system?
     11. What is the recommended way to get disks? Should I buy some NAS drives (I am looking at Ironwolf drives) or get them from enclosures?
     12. Can I run 2x USB sticks and set one as a backup for the other?
     Sorry for the mass of questions. I am currently thinking that Unraid might be better because I can expand the pool and because I can get some of my information back even if there is a total failure of the system. Thanks in advance.
    1 point
  23. Yes, it's actually running flawlessly since the latest version of the Docker container. I've been in contact with the developer who provided it, and with some of his effort, my debugging notes and the latest MediathekView update, it started working. I've been meaning to provide an iteration for Unraid, but have been too busy to look into it. Here's how to get it working:
     - Add the container from https://hub.docker.com/r/conrad784/mediathekview-webinterface/
     - Choose a port, config and download location.
     - Add the variables USER_ID=99, GROUP_ID=100 and UMASK=0000.
     That's it. The VNC web GUI doesn't work too well on mobile (searching with a soft keyboard is a pain) but works perfectly on desktop. If you are in Austria or use an Austrian VPN and want content from ORF, note the config changes to the ffmpeg/VLC parameters required in the MediathekView forum.
    1 point
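Translated into a plain docker run, the setup above might look roughly like this; the host port, internal port and container-side paths are placeholders (check the image's Docker Hub page for the real container port and paths), and only the three variables are taken from the post:

```bash
# Sketch only; internal port and container-side paths are assumptions.
docker run -d \
  --name=mediathekview \
  -e USER_ID=99 \
  -e GROUP_ID=100 \
  -e UMASK=0000 \
  -p 8080:8080 \
  -v /mnt/user/appdata/mediathekview:/config \
  -v /mnt/user/downloads:/downloads \
  conrad784/mediathekview-webinterface
```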
  24. NAS and enterprise drives typically come with additional features such as:
     1. Longer warranty
     2. Better vibration / drop protection
     3. (Free / discounted) data recovery service
     4. Better rating for continuous (24/7) operation
     Let's use ZFS as an example. It is RAID, which stripes data across multiple drives. That means:
     - All drives have to be spun up, so a RAID system almost never spins down. That makes (4) rather important.
     - If more drives fail than the number of parity drives, ALL data is lost. That makes (3) critical if any data at all is to be recovered.
     - It is usually deployed in enterprise / corporate environments, where (a) staff don't care as much about handling things with care and (b) server racks are usually terribly designed with regard to vibration absorption, and vibration is bad for moving parts. That makes (2) important.
     - Companies usually depreciate their assets and, once fully depreciated, they will replace them, usually with a small accounting profit if the assets are resold. This depreciation is typically done over 5 years, 10 years, etc. Wonder why enterprise drives tend to have a 5-year warranty? That's why. That makes (1) important.
     Compare that to Unraid, which is NOT RAID. There's no striping; each disk has its own file system. So:
     - Drives can be spun down when not in use, making (4) not as important. In fact, there's an argument AGAINST NAS drives with Unraid precisely because they are rated for 24/7 operation, not the up-and-down usage pattern of Unraid. Moving parts rated for continuous operation don't necessarily take kindly to being switched on and off regularly.
     - If more drives fail than the number of parity drives, only the failed data drives lose data. That makes (3) less important because usually some data is recoverable. Depending on the data, "some" may be good enough.
     - Most Unraid users deploy their servers at home in consumer cases. That makes (2) less important. Consumer cases usually house fewer drives than, for example, a 4U rack-mount case (or a Storinator!), so there is less overall vibration. A lot of consumer cases also have vibration mitigation built in, because vibration means noise, which is rather not appreciated at home. Also, if you own the stuff, you tend to be more careful than that IT guy who just broke up with his lady.
     - Home users don't replace storage on a fixed schedule but rather only when it stops working, which typically is WAAAAAAAAYYYYYY after the warranty has expired. That makes (1) less important.
     Now all those points could be thrown out of the window if consumer drives were terrible and failed significantly more often than their enterprise brothers (including NAS types). Fortunately, we have Backblaze to the rescue with its annual HDD failure analysis. In short, enterprise drives ARE better, but not by much (like a 0.05% - 0.1% difference). So you will be paying at least 20% more for NAS drives that are at best 0.1% less likely to fail, for features that are not as important to Unraid. If money is not a concern (e.g. Linus Sebastian) then OF COURSE go for the NAS and enterprise drives because they ARE better. But when value is important (e.g. the rest of us), I'd say the benefits don't justify the cost.
    1 point
  25. Setting the permissions for root access seems to have fixed the problems I was having with Krusader opening every time as if it were the first time. I can now save bookmarks etc., and the .config and .local files are there now. When I left the permissions at the default, those files didn't exist as far as I can tell.
    1 point
  26. I use the Unassigned Devices plugin. There is a sample script included in the first couple of posts in the UD support thread. That script automatically backs up the "Photos" share when the device is plugged into the server. I have modified it for my purposes to back up several shares from my unRaid server when I plug in my 14TB USB drive. If the data you want to backup from the array is larger than the capacity of a single drive, you could have a script for one USB Unassigned Device that backs up shares A, B and C and another script for another USB drive that backs up shares D, E and F. For backup of the three desktop/laptop PCs in my home, I run Acronis True Image (there are also some free backup solutions you could use - like Macrium Reflect) and have a Backups share on the server to which they point as the destination. Each PC has its own folder within the Backup share. Others have opted for a separate share for each machine to be backed up with backup user credentials for each of those shares.
    1 point
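This is not the bundled UD sample script, just a rough sketch of the idea it implements, assuming the $ACTION / $MOUNTPOINT variables that Unassigned Devices passes to device scripts and some made-up share names:

```bash
#!/bin/bash
# Rough sketch of a UD device script: when the USB drive is mounted, rsync a
# few shares onto it. Share names are examples; adjust to your own setup.
case "$ACTION" in
  'ADD')
    for share in Photos Documents Music; do
      rsync -a --delete "/mnt/user/${share}/" "${MOUNTPOINT}/${share}/"
    done
    sync
    ;;
esac
```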
  27. [SOLVED - upon further reading I have discovered I need to set the user to root] I've been trying to create the link to the flash drive using this video, but I get an error saying it can't connect. Other links are working great. Is it a permissions problem? Do I need to set this docker as 'privileged'?
    1 point
  28. You should take that post by testdasi as a warning and not attempt to make any changes to your router without knowing exactly what to do. See this previous post in this same thread for more information:
    1 point
  29. You have to purposely set things up with the Virgin Media router to expose your network to the Internet so unless you have done something, your server isn't visible by default.
    1 point
  30. The server should never be exposed directly to the internet as it is not hardened for that. If you want to access it remotely then it is recommended that you use the built-in WireGuard VPN facility.
     As far as Windows is concerned, Unraid is just a network device exposing shares. There is no concept of a single sign-on, but Windows can remember passwords. In fact one of Windows' quirks is that you cannot have a single user signed on twice at the same time to the same server with different passwords. If you have any problems connecting from Windows then ask in the forum, as Microsoft keep changing things at the Windows end via their updates, which can cause unexpected behavior.
     Any drive plugged into the server at the time you start the array counts towards the license, regardless of whether Unraid is using it or not. Drives plugged in after array start do not count, but it is convenient if your license is of a tier that allows you to leave such drives plugged in.
     There is not (currently anyway) any built-in backup capability, but running your own script to do this is easy enough via the User Scripts plugin. There are also a number of docker-based solutions that many users use.
     Unraid identifies drives via their serial number, so if this is not easily visible many users attach a label giving the last few digits of the serial.
     The one addon that is deemed essential is the Community Applications (CA) plugin, as this adds app-store-type capability that can be used for finding and installing any further plugins and/or docker containers. Unassigned Devices (UD) will help with managing the USB drives you mentioned. Fix Common Problems (FCP) will help identify mistakes that are frequently made in setup. User Scripts provides an easy way to run scripts either on demand or on a schedule. There are plenty of other addons available and many users will have their own views of what they consider essential. If in doubt, ask in the forums.
    1 point
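On the point about identifying drives by serial number, a couple of terminal commands make the mapping visible without opening the case (smartctl ships with Unraid as part of smartmontools):

```bash
# The by-id symlinks embed model and serial number in their names
ls -l /dev/disk/by-id/ | grep -v part

# Or query one drive directly (adjust the device name)
smartctl -i /dev/sdb | grep -i serial
```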
  31. Hi all, I have ordered my 3x 4TB drives. I intend to add 2 more drives, probably WD Red Pro, and then use them as the parity drives. My drives should be arriving tomorrow and I have gone through and started the setup of Unraid without the array. I have come up with a few questions and I hope someone will be kind enough to point me in the direction of the answers.
     The first is about security and passwords. I have set up some user accounts, each with a password, on the server; obviously these don't have any shares associated with them yet because there is no array. I have also set a password for the root account. Is the server visible to the internet, and with the passwords set is it secure? I don't have anything like a firewall other than what is in the Virgin Media router. I don't currently plan on accessing the server remotely; only computers on the LAN will have access.
     I haven't got to this point yet, but how do I access the shares on Windows 10? I haven't seen anything on the wiki about this. Do I just map to the shares in Win 10? If I do this, how do the passwords work for the users which have access to these shares? Is there a single sign-on, and can Windows save the logon information?
     How do I set up backup to an external USB hard drive, and does this drive count towards the license?
     Finally, is there a way to identify the drives in the array so that if I get a failure I know which one it is? I was thinking of adding a disk and marking it, shutting down, adding a second disk and marking it, and so on. Are there any recommended addons I should install?
    1 point
  32. A 2-drive array (1 data, 1 parity) behaves like a RAID-1 - it's the beauty of math (see the sketch after the next item). It doesn't mean having a 2-drive array is NOT having parity. You should not try to have more drives just for the sake of "having some parity". Parity is there to help you when a drive fails; it's better to have a lower probability of a drive failure so you don't need to use parity to begin with. Backblaze has also reported that newer, larger-capacity drives seem to have better reliability than older, smaller-capacity drives. Starting small with big drives will also help with future expandability. If you start with 4x 4TB now, when you next expand you can only get 4TB or smaller drives; if you get a larger drive, it will have to be put in the parity slot (remember: no data drive can be larger than the parity drive). SATA ports are limited. HDD mount points are limited. 4TB will eventually go the way of the Neanderthals.
    1 point
  33. With 1 parity disk assigned to parity1, and one data disk, yes, the operation is pretty much a RAID1. Any other configuration reverts to the general parity disk calculations, with the associated speed penalties. Disks can be configured to spin down when not being actively accessed. However, with only 2 disks and no cache drive defined, any docker container or VM services will keep the array spun up constantly.
    1 point
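Why the two posts above can call a 1-data + 1-parity array a de facto RAID-1: Unraid's single parity is just the bitwise XOR of all data disks, so with only one data disk the parity disk ends up a bit-for-bit copy of it. A short sketch of that reasoning:

```latex
% Single parity P over data disks D_1 ... D_n, taken bitwise:
%   P = D_1 \oplus D_2 \oplus \dots \oplus D_n
% With only one data disk (n = 1) this reduces to P = D_1,
% i.e. the parity disk mirrors the data disk exactly.
\[ P = \bigoplus_{i=1}^{n} D_i \;\;\xrightarrow{\,n=1\,}\;\; P = D_1 \]
```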
  34. For PC parts, don't even touch Currys with your toe. There are many reputable PC parts dealers such as Scan, More Computer, CCL, just to name a few. Amazon is also fine (just don't buy HDDs from them due to their environmentally friendly packaging that lacks padding). With regard to HDDs, 4TB and 8TB are basically the same in terms of price per GB, so you should get 2x 8TB instead of 4x 4TB. HDDs fail in a probabilistic manner, so the more drives you have, the more likely you are to have a failed one. Now your questions:
     1. Do you think the hardware above is OK for running Unraid? As a NAS - sure. I have had Unraid running on older stuff than an i7 920.
     2. Can I set up the system to sleep over night and in the day when there is minimal usage? Can't send it to sleep - I have never managed to do that. You can script it to shut down at a certain time. Waking it up, however, depends on hardware support and your skill level (to set up Wake-on-LAN, for example).
     3. How do the two systems compare in relation to drive failure? FreeNAS is a RAID architecture which has 1 fundamental issue - if you have more failed drives than the number of parity drives (e.g. 3 failed drives on a Z2, which has 2 parity drives), you will lose ALL your data. With Unraid, in contrast, if you have more failed drives than the number of parity drives, you will only lose data on the failed drives (i.e. whatever is saved on the good drives will still be good).
     4. How does Windows see the drives; can I set it up as a single drive and have folders within this that have some restrictions? Via SMB, i.e. network shares. Each share is a folder that spreads across the drives (or not, depending on how you set things up) and has user access restrictions.
     5. How is the data written to the drives, for example if I put a folder with 4 documents inside, are these split across the drives? It all depends on your split level settings, allocation method and include/exclude disks. It could be 1 file per drive, or 2 on 2 drives each, or 4 on 1 drive.
     6. How easy is it to expand the pool? Very easy. There's a wiki for that.
     7. How easy is it to handle a drive failure? There's a wiki for that.
     8. Can I run a backup to an external drive over USB? Yep.
     9. What should I do with the SSD (I was going to install FreeNAS on it); could I use it as a cache drive or should I get another one for RAID 1 before I do this? A cache drive is fine. You may not even need it for cache if you don't run VMs / docker. Modern drives tend to be limited by the network speed.
     10. How does a cache disk affect the system? It depends. If you have docker / VMs (docker for Unraid is like an app for your phone) then you should have a cache drive for the docker image and appdata. Without cache, things can get very slow. As a pure NAS (i.e. just a network folder for people to save stuff on, without docker / VMs), a cache drive will improve write speed but only up to the capacity of the cache drive (then you have to run mover to move data from cache to array, either on a schedule or manually). For most home networks, the network speed is the bottleneck and not the array write speed, so you won't be able to utilise the faster write speed anyway; hence, there's no need for a cache drive.
     11. What is the recommended way to get disks; should I buy some NAS drives (I am looking at Ironwolf drives) or get them from enclosures? For Unraid, just get the cheapest you can get for the desired capacity from a reputable dealer (except Amazon, due to the shipping issue I mentioned above). There is no need to get "NAS" or "Enterprise" or anything like that. Toshiba 8TB HDDs are quite cheap right now.
     12. Can I run 2x USB sticks and set one as a backup for the other? No. You can manually back up the USB stick (or automate it using a plugin / script) to the cache / array. However, you can't have 2 USB sticks as live backups of each other. Your license is based on the GUID of the USB stick, so if your stick fails, you need to follow the stick replacement procedure as outlined by Limetech.
    1 point
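A tiny sketch of the "script it to shut down at a certain time" suggestion in answer 2, intended for the User Scripts plugin on a custom cron schedule (e.g. "0 1 * * *" for 01:00 nightly); it assumes that poweroff triggers Unraid's normal clean-shutdown sequence on your version:

```bash
#!/bin/bash
# Scheduled night-time shutdown. Pair with Wake-on-LAN or a BIOS RTC alarm
# if you want the box to come back up on its own.
poweroff
```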
  35. Just a page back: https://forums.unraid.net/topic/71764-support-binhex-krusader/page/19/?tab=comments#comment-805263
    1 point
  36. I am going to assume that /UNRAID points to your Flash/Boot Drive. You will probably have to set Krusader to run as 'root' rather than 'nobody'. You do this as shown below: It would be a good practice to always have Krusader 'stopped' except when you are actually using it. (Anyone can access it using any browser when it is running if they know how!)
    1 point
  37. +1 it might help others in the future when doing the same thang! thx.
    1 point
  38. I did this recently when toying around. I think you just click on the VM 'name'; this will give you a dropdown of the disks and ISO images contained in the VM. Then click on the vdisk size entry itself, in your case the 40G figure, and voila, you'll be given an edit option to change the size. Not sure if that's the 'official' way to extend the size, but it certainly worked.
    1 point
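As an alternative to the GUI edit described above (equally unofficial), the same kind of resize can be done from the terminal with qemu-img while the VM is stopped; the path below is only an example:

```bash
# Grow the vdisk image to 60G (example path; use your own domains share layout)
qemu-img resize /mnt/user/domains/MyVM/vdisk1.img 60G

# The partition/filesystem inside the guest still has to be expanded afterwards
# (Disk Management in Windows, growpart/resize2fs on Linux, etc.).
```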