Leaderboard

Popular Content

Showing content with the highest reputation on 08/03/18 in all areas

  1. If you can't move your server without getting into trouble, then it isn't a software problem but a hardware problem. unRAID doesn't have any magical software vibration sensors telling it that it's expected to give you trouble after a computer move.

     Emulation isn't "let's recreate the files on the disk". Emulation is "let's recreate the binary content of the disk". So if something is wrong with the file system of the drive and you then rebuild to a new disk, you'll end up with a new disk that also has something wrong with the file system. Parity always operates on raw disk blocks, so it is not a work-around for broken file systems, deleted or overwritten files, accidentally formatted disks, etc. If the file system on a disk is broken, then rebuilding to a new disk will not fix the issue - you need to run file system repair software. The difference between unRAID and a traditional RAID installation is that in a traditional RAID there is only one file system, so if that file system breaks, every single file goes offline.

     Your original task shouldn't have been to add a second parity disk, but to figure out exactly why you have been losing data. If it's a hardware issue, then you can't solve it by adding additional parity disks - you need to eliminate the hardware problem. If it's user error, then you obviously need to identify what you are doing wrong and stop doing it. The forum can help you figure out why you are having problems, but that requires that you supply enough information - and aren't so quick to perform different actions.

     Incorrect configuration or incorrect actions are the most common reasons why people lose data, whatever RAID system they use. With unRAID, it's most often people not turning on notifications, or people formatting drives or rebuilding parity and thereby destroying important data. Right now, we don't know why you decided to rebuild drive 13.
All we know is that the file system is broken, from the following log:

```
Aug 2 21:46:24 Tower emhttpd: shcmd (282): mount -t xfs -o noatime,nodiratime /dev/md13 /mnt/disk13
Aug 2 21:46:24 Tower kernel: XFS (md13): Metadata CRC error detected at xfs_sb_read_verify+0xe5/0xed [xfs], xfs_sb block 0xffffffffffffffff
Aug 2 21:46:24 Tower kernel: XFS (md13): Unmount and run xfs_repair
Aug 2 21:46:24 Tower kernel: XFS (md13): First 64 bytes of corrupted metadata buffer:
Aug 2 21:46:24 Tower kernel: ffff88023b3c1000: 58 46 53 42 00 00 10 00 00 00 00 00 74 70 25 49  XFSB........tp%I
Aug 2 21:46:24 Tower kernel: ffff88023b3c1010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Aug 2 21:46:24 Tower kernel: ffff88023b3c1020: 97 da e6 c7 1c 83 46 9f 92 ad 3d 9f a7 d7 85 4f  ......F...=....O
Aug 2 21:46:24 Tower kernel: ffff88023b3c1030: 00 00 00 00 10 00 00 05 00 00 00 00 00 00 00 60  ...............`
Aug 2 21:46:24 Tower kernel: XFS (md13): SB validate failed with error -74.
Aug 2 21:46:24 Tower root: mount: /mnt/disk13: mount(2) system call failed: Structure needs cleaning.
```

We also know that:
- Your cache (sdh) has had 45 transfer errors, probably because of issues with cables.
- Disk 2 (sdb) has 2344 reallocated sectors.
- Disk 3 (sdd) has 64 reallocated sectors.
- Disk 5 (sdq) doesn't seem to store SMART data even though it seems to be identical to disk 10 (sdn), and doesn't claim SMART is disabled.
- Disk 6 (sdg) has 16 UDMA CRC errors.
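The log's own advice is to run xfs_repair. On unRAID that is normally done with the array started in Maintenance mode so nothing is mounted; a sketch, assuming the damaged file system is still on disk 13 (`/dev/md13`):

```shell
# Start the array in Maintenance mode from the webGUI first, then
# from a console or SSH session:

# dry run - report what would be fixed without writing anything
xfs_repair -n /dev/md13

# real repair once you're happy with the dry-run output
xfs_repair -v /dev/md13

# if xfs_repair refuses because of a dirty log, -L zeroes the log as a
# last resort (may lose the most recent metadata changes):
# xfs_repair -L /dev/md13
```

Repairing the md device (rather than the raw sdX device) keeps parity in sync while the repair writes.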
    2 points
  2. Hello all, I wanted to put this request out to the wonderful developers here in our community. I have toyed from time to time with the idea of developing or contributing to plugin development for unRAID. Every time I consider it, however, I end up put off by the lack of clear, concise documentation. I am a programmer by trade, but I am a noob when it comes to many of the languages and concepts used in the unRAID plugin system. With that said, I would like to see an example plugin that clearly demonstrates how a plugin should be structured and how to do the basic things a plugin does, hopefully with judicious commenting. I envision something that covers most of the basics, like: adding a settings page; adding a page/tab to the main interface; modifying or inserting something into an existing page (if possible); including a binary package; requiring a binary package from Nerd Pack (that is a thing, right?); running a script on command (button press); registering a script to run on a particular system event (array startup?); persisting settings across reboots; adding help text; and interacting with another plugin or system plugin (like dockerMan). I would also really like to see a detailed readme with requirements and step-by-step instructions for building and packaging the plugin for distribution. I know this is a lot to ask, but I think in the long run it could be of great benefit to our community. I myself would be extremely grateful, and would be willing to eventually repay the community in development contributions.
    1 point
  3. You could try dumping the vbios, or modifying one from TechPowerUp, and passing that to the VM. https://www.techpowerup.com/vgabios/?architecture=NVIDIA&manufacturer=NVIDIA&model=Quadro+4000&interface=&memType=&memSize=&since= https://www.youtube.com/watch?v=1IP-h9IKof0
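For the dumping option, one commonly used approach is reading the card's ROM via sysfs (a sketch - `01:00.0` is a placeholder for your card's actual address from lspci, and this generally only works if the card isn't currently driving a display):

```shell
# find the card's PCI address
lspci | grep -i -E "vga|nvidia"

# substitute your card's address for 01:00.0 below
cd /sys/bus/pci/devices/0000:01:00.0

echo 1 > rom                    # enable reads of the ROM
cat rom > /tmp/quadro4000.rom   # dump the vbios to a file
echo 0 > rom                    # disable again
```

The dumped file can then be referenced in the VM's GPU passthrough configuration, the same way a TechPowerUp download would be.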
    1 point
  4. Pinging 8.8.8.8 to check whether the internet is alive isn't, I think, best practice. Maybe you should ping the first DNS server that is defined, if it's external, or resolve an internet domain using two different DNS servers from different providers...
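That suggestion could be sketched roughly like this (the domain and resolver addresses are only illustrative placeholders, not a recommendation of specific providers):

```shell
#!/bin/bash
# Treat the internet as "up" only if a well-known domain resolves via
# either of two independent DNS providers (addresses are examples).
DOMAIN="example.com"
RESOLVER1="1.1.1.1"   # provider A
RESOLVER2="9.9.9.9"   # provider B

if nslookup "$DOMAIN" "$RESOLVER1" >/dev/null 2>&1 \
   || nslookup "$DOMAIN" "$RESOLVER2" >/dev/null 2>&1; then
    echo "internet up"
else
    echo "internet down"
fi
```

Resolving a name exercises the whole path (routing plus DNS), whereas a bare ping to 8.8.8.8 can succeed even when name resolution is broken.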
    1 point
  5. I love it. I'm in the UK, so I paid about £156 shipped from China for it (64GB version). It feels premium in the hand and gets monthly Android updates (last month's took it to Android v8.1). The camera performs pretty well, especially after enabling the Google camera API. Would definitely recommend it, and would definitely consider another Chinese phone.
    1 point
  6. If you want to keep the result of the scan to consult later, you need to save it using the "Write to cache file" function. I also experience some slowdown in the UI when scanning. Reloading the web page has no effect on the application, since you are just accessing the app's UI via VNC. The file manager is not supported in this container; however, you can use the "Open Terminal Here" function.
    1 point
  7. I realize that it is a moving target, and that is the reason for the lack of documentation, which is why I thought this might be a slightly easier way of providing a starting point. At least with an example plugin it is immediately obvious that it no longer compiles/runs when something changes. And unlike forum posts, its changes are version controlled. As for reverse engineering, I have tried that in the past and not gotten very far; that is partly the motivation for an example plugin. When reverse engineering a plugin, you need to be able to figure out both what is being done and how it is being done. The idea of an example plugin is that the "what is being done" part is very clearly spelled out, making it easier to understand the "how". I would be more than happy to assist with ongoing maintenance once I understand the architecture.
    1 point
  8. Your understanding of parity is correct, but your understanding of what a file is may be flawed. Think of it this way. Your hard drive contains what looks like random strings of 1's and 0's. It's only by using the table of contents that you can be sure of what section of data belongs to what file. That proportionally small piece of data where the table of contents resides is in a common "column" of addresses, so if the table of contents is modified on any of the drives, you will be corrupting the TOC for your parity emulated drive. No valid TOC, no files, at least not without forensic recovery tools. When you delete a file or move a file or anything like that, it updates the TOC for the drive. So, yeah, it's pretty easy to scrap the whole thing. Sometimes you can get the filesystem repair tools to rebuild the TOC, sometimes it's just too far gone. Building parity doesn't write to the data drives, but there are various background processes that can access and write to the drives. I was hoping when you said you were content to wait, that you were doing just that, waiting, not continuing to use the drives you wanted to recover with.
    1 point
  9. All dockers as per the image - so that they are all referencing the same mapping which is important to have good comms between dockers. Within each docker I added the relevant sub-folders e.g. here's one of my plex libraries:
    1 point
  10. I added /unionfs mapped to /mnt/user/mount_unionfs RW slave in the docker mappings, and then within sonarr etc added the relevant folders e.g. /unionfs/google_vfs/tv_kids_gd and /unionfs/google_vfs/tv_adults_gd and in radarr /unionfs/google_vfs/movies_kids_gd , /unionfs/local_media/movies_hd/kids etc etc
    1 point
  11. Have you got any apps other than Plex looking at your mounts, e.g. Kodi? Or maybe one of your dockers is not configured correctly and is mapped directly to the vfs mount rather than the unionfs folder.
    1 point
  12. Some thermal paste degrades a lot over time, but I think most thermal paste works quite well for the expected lifetime of the system - many industrial systems are expected to work well for 10-20 years without any need to replace the thermal paste. It's more likely that there was an alignment issue, or that not all of the chip had thermal paste. Either of these could produce a big variation in temperature across the chip, potentially making one part hot enough to become unstable while the temperature sensor still reads a temperature that doesn't require throttling.
    1 point
  13. Edit: 08/10/2018 - Updated rclone mount, upload script, uninstall script
      Edit: 11/10/2018 - Tidied up and updated scripts

Sharing below what I've got in case it helps anyone else. I use the rclone plugin, the custom user scripts plugin and unionfs via Nerd Pack to make everything below work.

docker mapping: For my dockers I create two mappings, /user ---> /mnt/user and /disks --> /mnt/disks (RW slave).

rclone vfs mount: /mnt/user/mount_rclone/google_vfs

My rclone mount below is referenced within dockers at /user/mount_rclone/google_vfs. I don't think it's safe in the top-level folder, and I also created a google_vfs folder in case I do other mounts in the future.

```
rclone mount --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m &
```

Local files awaiting upload: /mnt/user/rclone_upload/google_vfs. A separate script uploads these to gdrive on my preferred schedule using rclone move.

unionfs mount: /mnt/user/mount_unionfs/google_vfs

Unionfs combines gdrive files with local files that haven't been uploaded yet. My unionfs mount below is referenced within dockers at /user/mount_unionfs/google_vfs. All my dockers (Plex, radarr, sonarr etc) look at the movie and tv_shows sub-folders within this mount, which masks whether files are local or in the cloud:

```
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
```

My full scripts are below, annotated a bit.

rclone install - I run this every 5 mins so it remounts automatically (hopefully) if there's a problem:

```
#!/bin/bash

####### Check if script already running ##########
if [[ -f "/mnt/user/mount_rclone/rclone_install_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting, script already running."
    exit
else
    touch /mnt/user/mount_rclone/rclone_install_running
fi
####### End check if script already running ##########

mkdir -p /mnt/user/mount_rclone/google_vfs
mkdir -p /mnt/user/mount_unionfs/google_vfs

####### Start rclone_vfs mount ##########
if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs mount success."
else
    echo "$(date "+%d.%m.%Y %T") INFO: installing and mounting rclone."
    # install via script as no connectivity at unraid boot
    /usr/local/sbin/plugin install https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
    rclone mount --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m &
    # pause briefly to give the mount time to initialise
    sleep 5
    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs mount success."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone_vfs mount failed - please check for problems."
        rm /mnt/user/mount_rclone/rclone_install_running
        exit
    fi
fi
####### End rclone_vfs mount ##########

####### Start mount unionfs ##########
if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
else
    # unmount before remounting
    fusermount -uz /mnt/user/mount_unionfs/google_vfs
    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
    if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Movies & Series remount failed."
        rm /mnt/user/mount_rclone/rclone_install_running
        exit
    fi
fi

############### start dockers that need the unionfs mount or connectivity ######################
# only start dockers once
if [[ -f "/mnt/user/mount_rclone/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started"
else
    touch /mnt/user/mount_rclone/dockers_started
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    docker start plex
    docker start letsencrypt
    docker start ombi
    docker start tautulli
    docker start radarr
    docker start sonarr
    docker start radarr-uhd
    docker start lidarr
    docker start lazylibrarian-calibre
fi
############### end dockers that need the unionfs mount or connectivity ######################
####### End mount unionfs ##########

rm /mnt/user/mount_rclone/rclone_install_running
exit
```

rclone uninstall - run at array shutdown. Edit 08/10/18: also run at array start, just in case of an unclean shutdown:

```
#!/bin/bash

fusermount -uz /mnt/user/mount_rclone/google_vfs
fusermount -uz /mnt/user/mount_unionfs/google_vfs
plugin remove rclone.plg
rm -rf /tmp/rclone

if [[ -f "/mnt/user/mount_rclone/rclone_install_running" ]]; then
    echo "install running - removing dummy file"
    rm /mnt/user/mount_rclone/rclone_install_running
else
    echo "Passed: install already exited properly"
fi

if [[ -f "/mnt/user/mount_rclone/rclone_upload" ]]; then
    echo "upload running - removing dummy file"
    rm /mnt/user/mount_rclone/rclone_upload
else
    echo "rclone upload already exited properly"
fi

if [[ -f "/mnt/user/mount_rclone/rclone_backup_running" ]]; then
    echo "backup running - removing dummy file"
    rm /mnt/user/mount_rclone/rclone_backup_running
else
    echo "backup already exited properly"
fi

if [[ -f "/mnt/user/mount_rclone/dockers_started" ]]; then
    echo "removing docker run-once dummy file"
    rm /mnt/user/mount_rclone/dockers_started
else
    echo "docker run-once file already removed"
fi

exit
```

rclone upload - I run this every hour. Edit 08/10/18: (i) exclude the .unionfs/ folder from upload; (ii) I also run against my cache first to try and stop files going to the array, aka a 'google mover'. I also make it cycle through one array disk at a time, to stop multiple disks spinning up for the transfers and to increase the odds of the uploader moving files off the cache before the mover moves them to the array:

```
#!/bin/bash

####### Check if script already running ##########
if [[ -f "/mnt/user/mount_rclone/rclone_upload" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    touch /mnt/user/mount_rclone/rclone_upload
fi
####### End check if script already running ##########

####### Check if rclone installed ##########
if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
    echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    rm /mnt/user/mount_rclone/rclone_upload
    exit
fi
####### End check if rclone installed ##########

# move files
echo "$(date "+%d.%m.%Y %T") INFO: Uploading cache then array."
# alternate between the cache and each array disk in turn, so the cache is
# emptied first and only one array disk spins up at a time
for disk in disk1 disk2 disk3 disk4 disk5 disk6; do
    for source in /mnt/cache /mnt/$disk; do
        rclone move $source/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    done
done

# remove dummy file
rm /mnt/user/mount_rclone/rclone_upload
exit
```

unionfs cleanup - daily and manually. I don't run it from dockers anymore, as that was running too often and overkill:

```
#!/bin/bash

################### Clean up unionfs folder #########################
echo "$(date "+%d.%m.%Y %T") INFO: starting unionfs cleanup."
# for each file hidden by unionfs, delete the underlying cloud copy
# and then the _HIDDEN~ marker itself
find /mnt/user/mount_unionfs/google_vfs/.unionfs -name '*_HIDDEN~' | while read line; do
    oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}
    newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}
    rm "$newPath"
    rm "$line"
done
find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete

########### Remove empty upload folders ##################
echo "$(date "+%d.%m.%Y %T") INFO: removing empty folders."
find /mnt/user/rclone_upload/google_vfs -empty -type d -delete

# recreate key folders in case they were deleted, so future mounts don't fail
mkdir -p /mnt/user/rclone_upload/google_vfs/movies_adults_gd/
mkdir -p /mnt/user/rclone_upload/google_vfs/movies_kids_gd/
mkdir -p /mnt/user/rclone_upload/google_vfs/movies_uhd_gd/adults/
mkdir -p /mnt/user/rclone_upload/google_vfs/movies_uhd_gd/kids/
mkdir -p /mnt/user/rclone_upload/google_vfs/tv_adults_gd/
mkdir -p /mnt/user/rclone_upload/google_vfs/tv_kids_gd/

###################### Clean up import folders #################
echo "$(date "+%d.%m.%Y %T") INFO: cleaning usenet import folders."
find /mnt/user/mount_unionfs/import_usenet/ -empty -type d -delete
mkdir -p /mnt/user/mount_unionfs/import_usenet/movies
mkdir -p /mnt/user/mount_unionfs/import_usenet/movies_uhd
mkdir -p /mnt/user/mount_unionfs/import_usenet/tv

exit
```
    1 point
  14. For those interested. I created a new plugin "Dynamix Day and Night". This plugin automatically toggles between a day theme and a night theme, based on the sunrise and sunset times of your location. Personally I like a bright theme during the day and a dimmed theme during the night, but it is up to the user to select which theme is used during the day or night. See the OT for the URL and manual installation, or wait until it appears under CA. Enjoy.
    1 point
  15. SSH into the server or use the console and type: mover stop
    1 point