jamesac2

Members
  • Posts: 9
  • Joined
  • Last visited


jamesac2's Achievements

Noob (1/14)

Reputation: 0

  1. I've managed to work out how to fix this. Since I was using beets and doing everything in a shell window, I could just ls to see the files, then I ran a move on the root of the folder and it moved the files back to where they were (see the shell sketch after this list).
  2. Hi, I made a little error in beets and left a / in front of the file path, and it has moved a large amount of my music into the docker.img file. I've expanded the img file, but I can't seem to find a way of moving the files back again; I don't know if I can even see them, or whether they would be readable. Any ideas how I could achieve this so I can put my music back where it was? Thanks in advance.
  3. I have 64GB of ECC DDR4 memory, so I will change this, but I doubt I've run out of memory. I will give it a go, but isn't the buffer there to stop the playback issues?
  4. I'm hoping I'm not hijacking a thread here, but I am after some help with rclone and Plex on my Unraid box. I have created a VPS to re-download a lot of my shows, as it has a 100/100 unlimited data transfer connection for cheap. I've followed SpaceInvader One's guide to mounting the storage, which works perfectly up until a point: when Plex is playing a file and I rewind or fast-forward it, Plex crashes, which in turn locks Docker up completely, and the only way for me to resolve this is to restart my server, which is a pain as I run pfSense on it. This has happened a few times, and it's noticeable when a programme is around 75% through (it's happened to my wife on both occasions, so I've got to fix this!). My internet connection is not great, but I have 40/10, hence my need for a VPS (quicker than moving my content to Google Drive via upload). I am using the rclone beta on Unraid 6.7.2, but this also happened on 6.5.3, which is what I was on before the upgrade.

     Playing files is OK, and I have only had a streaming/buffering issue from Plex twice, when playing back a 1080p TV show of around 5GB for 45 minutes. These are my mount options for rclone, which I mount manually with a user script (when I power down the server I have to uninstall the plugin and reinstall it when it boots, which is a pain):

         mkdir -p $mntpoint
         rclone mount --allow-other --buffer-size 1G $remoteshare $mntpoint &

     My remote is mounted in disks/cloud, and I have read/write access for tidying up files and moving the odd file up to Google Drive from my server via Dolphin. I have moved a lot of my mechanical disks out of my server and gone pure SSD and M.2 storage for files I want quick access to, but the films and TV can be considered cold storage, and it wouldn't be the end of the world if I lost them (like what happened when I had Amazon Drive a couple of years back); I'm using Google Drive now. I've seen a lot of options tailored towards cache drives etc., but I'm not sure if that's what I need; I am only playing media to three clients in my house, and it would mostly be two at a time. I'm not sure what would be helpful to resolve this rewind/fast-forward issue (see the hedged mount sketch after this list); I don't mind waiting a couple of seconds for a programme to start, as long as once it's playing it doesn't crash my system.
  5. Hi all, I've swapped out all of my smaller drives for solid state (mostly NVMe, but some mSATA cards), and I currently have them running alongside my spinning hard drives. I'm taking out all the spinning disks, as I've moved a lot of the data that's replaceable into the cloud, mounted via rclone. The question is this: at the moment I have 2 NVMe drives as cache and the rest are part of the array. I don't use parity drives, so I don't think the TRIM issue I've read about matters. Apart from being able to pull the disks out and read them in most Linux distros, is there an advantage to having all my disks in the cache? I have 7 x 2TB drives, 2 x 1TB drives and 2 x 512GB drives; I currently use the two 512GB drives in RAID 0, and the important data on the cache is backed up every week. People may say I'm missing the point of Unraid, but I've had the licence a few years, and I'm happy with the simplicity of how it works and with the Docker and VM setup and usage, so I don't really want to move away from it. I do have a couple of smaller SSDs I could add as a single data drive, as I understand it won't run with just cache disks in it. Is this still the case? Thanks everyone in advance.
  6. Is this now dead/defunct? I cannot get the Docker container template to install. This was great when it was working, but I've had to fall back on hosting this on my Mac until I can get it back up and running.
  7. Hi, I am looking to see if there is a Docker app or plugin that would do this. I have decided to upgrade my cache storage, and I am now going to use it for some other folders as well as VMs, appdata etc. It's PCIe NVMe in RAID 0, so it's fast compared to the array of spinning disks. I would like a duplicate version on my array that "watches" for changes to the folders on the cache drive and syncs them (copy, not move) to what I would consider a backup on the main array (as I am aware of how easy it is to take a disk out of the array, launch Ubuntu and access it). A long time ago I used FreeFileSync for this, but since I moved to Unraid I've not found anything that would do it, only rsync, which runs on a schedule; I would prefer a real-time sync of changes, so I don't have to worry about whether last night's sync happened. An example: photos are in cache/photos, and I want a folder on my array called user/photobackup; when I make a change to something in cache/photos, it replicates that change in user/photobackup (see the inotify sketch after this list). I'm sure this is simple to set up!
  8. Hi everyone, I've been an Unraid user for a few years now and am looking to upgrade my setup. I currently do not use the parity function, as I always have my important data backed up elsewhere; however, I have previously run my setup with a parity drive, I understand the reasoning behind why the array slows down when parity disks are present, and I would consider getting a disk for cache. My new setup will consist of main storage on 10 x 10TB disks, which spin down after 30 minutes or so, while downloads, current CCTV, appdata, VMs, Docker stuff, scratch space etc. will be split across a couple of 1TB NVMe drives on the PCIe bus, so that storage is always available. I have had a lot of issues using Unassigned Devices with shares etc., but I don't want to add these NVMe drives to the array. Is it possible to have multiple arrays in Unraid? I am happy to add the NVMe disks as "cache" drives, but I don't want them raided via Btrfs; I've done this before with 4 SSDs, one died, I could not get my cache back, and I ended up having to do a clean rebuild and restore. The data that's important to me will be backed up to cold storage along with the appdata backups etc. I use the Dolphin docker a lot to move files, so network speed is not a consideration here, and all of my endpoints are only 1Gb anyway. So I think what I'm asking for is a better way to manage the fast everyday storage that doesn't get slowed down. Do I use cache, and if so, what would be the best way to be able to see these as separate disks?
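
Below is a minimal shell sketch of the recovery described in posts 1 and 2. The container name (beets) and the mapped music path (/music) are assumptions about a typical setup, not details confirmed in the posts; adjust them to whatever your own Docker template uses.

    # Open a shell inside the beets container; folders misfiled because of the
    # stray leading / end up at the container root, which lives inside docker.img.
    docker exec -it beets /bin/sh

    # Inside the container: list the root to find the misplaced artist folders,
    # then move them back under the volume that is mapped to the array share.
    ls /
    mv "/Some Artist" /music/        # /music is the assumed host-mapped share

Moving the folders (rather than copying them) also frees the space they were taking up inside docker.img.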
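For the rewind/fast-forward crashes in post 4, a commonly suggested direction is to let rclone read the remote in chunks rather than relying on one huge in-memory buffer. The sketch below is only a hedged variation on the mount line quoted in that post; the remote name, mount point and sizes are assumptions chosen to illustrate the flags, not settings taken from the thread.

    #!/bin/bash
    # Hypothetical user script; remote and mount names are placeholders.
    remoteshare="gdrive:media"
    mntpoint="/mnt/disks/cloud"

    mkdir -p "$mntpoint"
    rclone mount "$remoteshare" "$mntpoint" \
      --allow-other \
      --buffer-size 256M \
      --vfs-read-chunk-size 64M \
      --vfs-read-chunk-size-limit 2G \
      --dir-cache-time 72h &

--vfs-read-chunk-size asks rclone to request the file from Google Drive in growing byte ranges, which tends to make seeks cheaper than refilling a 1G buffer; whether it actually cures the Plex/Docker lock-up described above would need testing on the box itself.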
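For the real-time cache-to-array mirror asked about in post 7, one way to sketch it without a dedicated plugin is inotifywait (from inotify-tools) feeding rsync from a user script. The paths below come from the example in the post; everything else (the event list, running it as a background script) is an assumption about how it might be wired up, not a tested recipe.

    #!/bin/bash
    # Watch the cache folder and copy (never move) changes to the array.
    SRC="/mnt/cache/photos/"
    DST="/mnt/user/photobackup/"

    mkdir -p "$DST"
    inotifywait -m -r -e create,modify,delete,move "$SRC" |
    while read -r _event; do
        rsync -a "$SRC" "$DST"       # without --delete, files removed from the
                                     # cache are still kept in the backup copy
    done

Because the loop runs a full rsync pass on every event it stays simple but can be chatty on busy folders; lsyncd wraps the same inotify-plus-rsync idea more efficiently if this turns out to be too heavy.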