Leaderboard

Popular Content

Showing content with the highest reputation on 10/08/19 in all areas

  1. Add it after the AppID. This should be a simple fix; I will look into it after work. EDIT: Are you sure this is not a bug in the game itself that they will patch in the next few days? I've downloaded the stable version and it works without a flaw; then I deleted the whole folder and the Docker, installed the latest_experimental build, and it's the same as in your screenshot. I even checked whether the folder structure itself is different, but it's not - it's exactly the same, and even the missing steamclient.so is in the main directory... Is this the correct term for the beta build: '-beta latest_experimental', or is '-beta' enough? EDIT2: Fixed the Docker. Please click 'Check for Updates' on the Docker screen in Unraid and update the servers. The latest experimental build now runs fine.
    2 points
  2. Get your Diagnostics file (Tools >>> Diagnostics) and attach it to a new post.
    1 point
  3. To be clear - that's the latest version of the plugin. When the plugin is installed it downloads the latest version of rclone, i.e. to make sure you are on the latest version of rclone you have to uninstall and then reinstall the plugin.
    1 point
  4. Not really a miracle, just means the emulated disk has some corruption, hence why I posted:
    1 point
  5. Not that I have a DVD/BD attached to the server, but it should be covered by this: https://forums.unraid.net/topic/57181-real-docker-faq/page/2/#comment-566100
    1 point
  6. You are too awesome, thanks @ich777. Yeah, I only spent about 15min on A18 yesterday. Now that the docker has been updated I can roll in a few more hrs to really get a feel for it, but good vibes so far ✌️
    1 point
  7. Adding more than one ethernet interface sometimes breaks the ACPI layout; you have to make sure the graphics device's location is under gfx0.
    1 point
  8. Stop the array; if Docker/VM services are using the cache pool, disable them; unassign all cache devices; start the array to make Unraid "forget" the current cache config; stop the array; reassign all cache devices; re-enable Docker/VMs if needed; start the array.
    1 point
  9. I switched to this plugin this past weekend. I used to tar my backups manually anyway, so it's nice to have it integrated. Question though: can we get an option where the Dockers are updated and restarted before the verification? I just like to minimize downtime for my services as much as possible. Thanks
    1 point
  10. You can have multiple disks in the cache to give redundancy. You can use the CA Backup plugin to backup files from the cache to the array.
    1 point
  11. If I have read the diagnostics correctly, the ata1 and ata6 devices correspond to the parity1 and parity2 drives - not disks 1 and 6. You are correct to say that this type of error is a connection issue, typically relating to the SATA connection to the drive in question. Resets of this type have the symptom of slowing down disk I/O due to the delays introduced as the drive(s) continually reset. To identify the drives you need to start by searching the syslog for the ataX type strings until you find the entries where they are associated with a particular disk's serial number, to identify the correct drive(s).
    1 point
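    For illustration, that syslog search looks roughly like this from the console (stock Unraid syslog path; ata1/ata6 taken from this post):

      grep 'ata1\.' /var/log/syslog | head -n 20   # boot-time lines pair ata1.00 with the drive model and serial
      grep 'ata6\.' /var/log/syslog | head -n 20
      # match the serial shown there against the Main page to identify the physical drive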
  12. From the release notes it looks like they have spent a lot of time implementing positive features and listened to the community closely. Performance seems better from just a quick test too, so that is very positive for me. I just updated the GAME ID field to be "294420 -beta latest_experimental". This downloads the correct server version, but I haven't yet been able to get it to run. Still tinkering...
    1 point
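    For reference, a rough sketch of the SteamCMD call that GAME ID field maps to (the install directory is illustrative, and how the container splits the field is an assumption, not taken from the template):

      steamcmd +login anonymous \
               +force_install_dir /serverdata/serverfiles \
               +app_update 294420 -beta latest_experimental validate \
               +quit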
  13. You can set up a recycle bin with the use of the plugin and some scripting, but it just allows you to hold deleted files for a set time period rather than deleting them the moment you throw them in the trash. I think you'll need to set up media management on your download client - just a guess though.
    1 point
  14. Probably doesn't matter, except it may interfere with stopping the array. Personally I'd try stopping the array and see what happens. If it won't stop, look at the process list in the console and kill any dd processes.
    1 point
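    A rough sketch of what that looks like from the console (dd invocations normally contain if=/of=, so they are easy to spot):

      ps -ef | grep '[d]d if='   # list running dd commands; the [d] keeps grep out of its own results
      kill <PID>                 # replace <PID> with the process id from the listing
      # kill -9 <PID>            # only if it refuses to exit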
  15. It would probably be better to simply remove the unneeded drives and rebuild parity than to struggle with this method. Are you afraid one or more of your active drives that you plan to keep is failing, and need to keep parity valid in case it dies?
    1 point
  16. "Keep it in the family" makes sense. Not sure how I'm going to proceed. Thank you for all your help and insight!
    1 point
  17. After looking at your syslog, it appears it is correcting the same sectors. Bad memory will typically result in random sectors being corrected, since the specific memory accessed during disk reading will not be deterministic. I suspect a controller issue or an actual issue with one or more disks causing the same sectors to return bad data. I didn't notice any issues with the SMART reports on any of the array disks. You might try an Extended SMART test for each of them: click on a disk to get to its page, where you'll find the Self-Test option.
    1 point
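    As an alternative to the disk page route described above, the same extended test can be started from the command line with smartctl (replace sdX with the device in question):

      smartctl -t long /dev/sdX                     # start the extended self-test; it runs on the drive itself
      smartctl -a /dev/sdX | grep -i 'self-test'    # check progress and the result log later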
  18. Hi, not sure if this is the right place. I installed the container "nut influxdb exporter" and it points to this post as the support post. Anyway, I'm trying to use it to bring in my UPS stats and have run into a problem. My UPS is a Cyberpower OR2200PFCRT2Ua. When I install the container, if I delete the entry for WATTS the container works but the data is incorrect: it shows a usage of only about 44W, when the UPS front panel indicates the load is actually 172W. In NUT I have to configure the setup as: UPS Power and Load Display Settings: Manual; UPS Output Volt Amp Capacity (VA): 2200; UPS Output Watt Capacity (Watts): 1320. If I do this then in Unraid all the UPS information is displayed correctly on the dashboard. However, if I enter 1320 into the WATTS entry of the container it instantly stops after starting and displays the following error message:

      [DEBUG] Connecting to host
      Connected successfully to NUT
      [DEBUG] list_vars called...
      Traceback (most recent call last):
        File "/src/nut-influxdb-exporter.py", line 107, in <module>
          json_body = construct_object(ups_data, remove_keys, tag_keys)
        File "/src/nut-influxdb-exporter.py", line 85, in construct_object
          fields['watts'] = watts * 0.01 * fields['ups.load']
      TypeError: can't multiply sequence by non-int of type 'float'

      So to get accurate data I need to enter the WATTS info, but then the container doesn't like it. If I omit the WATTS info the container runs but reports the wrong info. Any help is appreciated, and sorry if this is perhaps the wrong thread... *EDIT* As an aside, I did some digging: my UPS is reporting ups.load as 14. If I do the math in the last line, watts (1320) * 0.01 * ups.load (14), I get 184.8W. The front panel is reporting 185W currently. So the math is right; it just appears that maybe one of the entries isn't seen as an actual number for some reason.
    1 point
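    For what it's worth, that TypeError usually means the values are still strings when the multiplication runs. A minimal, hypothetical Python illustration using the numbers from this post (not the maintainer's actual code or fix):

      # values as the exporter likely receives them: env entries and NUT variables arrive as strings
      fields = {'ups.load': '14'}   # reported load in percent
      watts = '1320'                # the WATTS container variable

      # fields['watts'] = watts * 0.01 * fields['ups.load']   # TypeError: can't multiply sequence by non-int of type 'float'
      fields['watts'] = float(watts) * 0.01 * float(fields['ups.load'])
      print(fields['watts'])        # 184.8 - in line with the ~185W shown on the front panel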
  19. Have you actually visited that link? It tells you if you are not properly connected through privoxy, and gives instructions on how to fix. I think that link is the ideal thing to put there.
    1 point
  20. I would next suggest that you go to the support thread for the ownCloud Docker and ask about this problem there. I did a quick look and apparently there are two ownCloud-based Dockers. Please understand that I was talking about shares or directories (folders in the Windows world), not files (although the same thing would happen with files). In the Linux world, you can have a directory named Test and another one named test and everything would be fine: you would have two directories with different names and you can access either one. In the Windows world, you could not even create the second one, as Windows would prevent it. However, when a Windows computer is connected (via SMB) to that Linux computer, it would only recognize one of these two directories (the first one it encounters as it scans the names) and totally ignore the other one. The second one has totally 'disappeared' from the Windows world. This problem often happens when Linux processes (Dockers/plugins) are creating directories for use by Windows users... I didn't really look at what Docker you were using (assuming the simple solution lay in the above-mentioned issue). Having taken a deeper look at what I think ownCloud does, it might be that you have lost access to the Cloud environment (or portions of it).
    1 point
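    A quick, hypothetical way to see that behaviour from the Unraid console (the names here are just an example):

      mkdir /mnt/user/Test /mnt/user/test   # both succeed - Linux treats these as two distinct directories
      ls -al /mnt/user | grep -i test       # lists both entries
      # a Windows client browsing the same location over SMB will typically show only one of the two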
  21. Yes. Unmanic will just copy the stereo AAC streams from the source file to the destination file without re-encoding. Your subtitle issue is likely due to the subtitle codec of the source being image based. Unmanic can only handle text based subtitles at this time. Image based ones will cause an error with the FFMPEG command. I have created a feature request on GitHub for modifying the application's behaviour in dealing with these image based subtitle codecs: https://github.com/Josh5/unmanic/issues/74
    1 point
  22. Let's start with a possible simple solution. Open up the Terminal window in the GUI (the -> icon on the Toolbar) and enter the following command:

      ls -al /mnt/user

      Look and see if you have two shares that have the same name EXCEPT for the capitalization. If it is a folder, you will have to add the share name and folder names to the command line until you get to the folder in question. For my share named Media I would be doing this:

      ls -al /mnt/user/Media

      Note that capitalization is important in Linux! If you find two shares (or folders) with the same name, the files in one of them will have to be copied/moved to the proper one. Then delete the empty one. You will also have to 'fix' the configuration for the application that is misbehaving. If this is not the problem, post a Diagnostics file with your next post (Tools >>> Diagnostics).
    1 point
  23. You can do what you are asking. I suggest you start out by preclearing the new 6TB drive. Not that it needs to be cleared, but this is a good test - not absolutely necessary, but I would definitely do it. You would install the 6TB into the array as disk1 (the only disk in the "array"). unRAID would allow you to format it, which you should do. You could then copy all of your data from the Drobo to the unRAID disk1. You could then move your Drobo disks over to the new array: stop the array and install them as disk2-diskN. Then you could format the 3TB (former Drobo) disks (you are now 100% dependent on the 6TB drive's data). You can then copy half of the files to disk2 and half to disk3 (or however you want to split them up). When done, you can do a New Config: assign the 6TB drive to the parity slot and the 3TB drives to the disk1-diskN slots. When you start the array it will build parity. It is not the safest procedure - you might want to run md5 checksums on the disks on the Drobo and compare to the 6TB drive. I might suggest buying two 8TB drives (they can be had for $189 ea.): install one as parity, one as data, copy the Drobo files over, and keep the Drobo files as backup - at least until you get to know unRAID, burn in your server, and make sure all is working well. It is good to have backups, so I might suggest keeping it that way. But if the data is not critical or is backed up elsewhere, you can move your Drobo drives into the unRAID server, where they can be precleared and added as empty drives. That would be my recommendation.
    1 point
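    For the checksum step mentioned above, a hedged sketch (the paths are illustrative; md5deep or similar tools would also work):

      cd /path/to/drobo/files                  # the source files still on the Drobo
      find . -type f -exec md5sum {} + > /tmp/drobo.md5

      cd /mnt/disk1                            # where the copy landed on the 6TB drive
      md5sum -c /tmp/drobo.md5                 # reports OK or FAILED per file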
  24. 6.3.2 still includes the powerdown script. It's been deprecated in the sense that it's no longer the original script as developed by dlandon; all it does now is call poweroff / reboot:

      #!/bin/bash
      logger "/usr/local/sbin/powerdown has been deprecated"
      if [[ "$1" == "-r" ]]; then
        /sbin/reboot
      else
        /sbin/poweroff
      fi

      Since I've been typing powerdown and powerdown -r for years now, muscle memory is burned into me to only use that (especially since previously poweroff / reboot were not graceful at all in unRaid). TBQH, I'm going to keep posting the powerdown commands if / when I help someone, as it's what I'm used to. (And if LT ever actually deletes the powerdown script, then I'm going to add it back in - just too used to it to change.)
    1 point