Posts posted by J.Nerdy
-
Hello all
This morning's self-test returned a warning of disk read errors (168) on the parity drive. The short SMART test passes. I am sure this is just a precursor to disk failure (I have another drive on standby).
I have attached the diagnostics and SMART results and would like to learn what to look for in these reports. Any assistance on moving forward would be appreciated.
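For anyone following along, the relevant SMART attributes can be pulled directly with smartctl (the device name /dev/sdb below is an assumption; substitute your parity drive):

```shell
# Dump all SMART data for the suspect drive (device name is an assumption).
smartctl -a /dev/sdb

# Attributes most often tied to impending failure:
#   5   Reallocated_Sector_Ct   - sectors already remapped
#   197 Current_Pending_Sector  - sectors waiting to be remapped
#   198 Offline_Uncorrectable   - sectors that failed offline reads
smartctl -A /dev/sdb | grep -E 'Reallocated|Pending|Uncorrect'

# A passing short test does not exercise the whole surface; queue a long test:
smartctl -t long /dev/sdb
```

Non-zero and growing values on 5/197/198 are the usual sign it is time to swap the drive.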
-
1 minute ago, johnnie.black said:
Trim doesn't work on SAS2008 controllers (9201, 9210, 9211, etc), it does work on SAS2308 controllers like the 9207.
Cheers JB.
-
I didn't even think about TRIM... the cache is constantly being written, erased, and overwritten. Since the cache drives are on native ports, I probably should check my TRIM routines.
Thanks for the maths on the 9207; it makes the decision process significantly easier. Seems like the no-brainer way to go (especially for pure storage versus heavy overwrites).
Cheers!
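For reference, a quick way to check whether TRIM is actually reaching the cache SSDs (the /mnt/cache mount point and /dev/sdc device are assumptions; substitute your own):

```shell
# Manually TRIM the cache filesystem; -v reports how many bytes were discarded.
# If the controller blocks TRIM, this fails with "the discard operation is not supported".
fstrim -v /mnt/cache

# Confirm the block device advertises discard support (non-zero DISC-GRAN/DISC-MAX = supported).
lsblk --discard /dev/sdc
```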
-
I have populated all of my SATA and M.2 slots with drives for storage, parity, and cache.
I would like to add a couple more 4TB Reds I have; will an LSI 9207-8i provide adequate bandwidth (the full 6 Gb/s per drive)?
Are there other cards I should consider? If I do migrate away from spinners (the cache drives are already SSDs), is there a better option that balances present needs with future growth?
The more I read, the more indecisive I become, so any experience-based advice would be appreciated.
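A rough back-of-envelope for the 9207-8i, assuming its PCIe 3.0 x8 host interface and eight 6 Gb/s ports (numbers are approximate):

```shell
# PCIe 3.0 carries ~985 MB/s per lane after encoding overhead; the card uses 8 lanes.
echo "host link:  $((985 * 8)) MB/s"   # 7880 MB/s to the CPU
# Eight SATA III drives at their 600 MB/s interface ceiling:
echo "drive side: $((600 * 8)) MB/s"   # 4800 MB/s worst case
# The host link exceeds the drive-side total, so every port can run at full speed simultaneously.
```

In practice spinners top out well under 200 MB/s each, so the card has headroom even for a future all-SSD build.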
-
Excellent advice.
Thank you!
-
Current config:
4 x 4TB array
1 x 4TB parity
2 x 480GB SSD cache pool
42% of Array used.
- Media Files
- Disc Backups/images
- Documents
Crashplan to cloud and external backups
I have another 4TB disk and was planning on using it for added fault tolerance as a second parity drive (parity2). Given the small number of disks, redundancy in both cloud and physical backups, and only ~42% of capacity used, would it be wiser to increase capacity instead? Is a dual-disk failure during a rebuild, with "only" five data disks, a remote enough possibility to justify the 20% increase in capacity?
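For the capacity side of the trade-off, the arithmetic with the 4TB disks above works out as:

```shell
# Current usable capacity: 4 data disks x 4TB.
echo "current:  $((4 * 4)) TB"            # 16 TB
# Using the spare disk as a 5th data disk instead of parity2:
echo "expanded: $((4 * 4 + 4)) TB"        # 20 TB
# The extra disk is 25% of the old total, or 20% of the new total.
echo "gain vs old total:  $((4 * 100 / 16))%"   # 25
echo "share of new total: $((4 * 100 / 20))%"   # 20
```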
Thanks, and happy labor day to all.
-
Since the front end is just "re-themed", are there any improvements to the backup engine in PRO?
It took a healthy amount of time to back up my 6.3TB.
-
2 hours ago, Lev said:
I did some quick research; here are the facts I found that told me all I needed to know to form an opinion.
-- The screenshots for the GUI of CrashPlan's Small Business app look different than this one we currently use as Home users.
-- The pricing is $10/month, which is roughly double what we pay now as Home users for 12 months
-- Small business won't have all the same features we presently have.
I'll save the effort of expressing my opinion and use it more constructively on finding an alternative during the next year, as I had recently renewed for another year prior to this announcement.
@gfjardim thank you for your contribution to providing and supporting this container. It's been solid and proved its value over the last 14 months I've used it. If it continues to be supported until its final end a year from now, that's 2+ years to be proud of. Cheers!
It's double... per device. That's the part that rubs me the wrong way. I had 6 devices backing up on Home (including my server)... that would take me from $129/yr to $720/yr...
YIKES
-
Well isn't this a groin kick...
My 6TB+ back-up just finished two days ago.
That said, CP for Small Business does offer unlimited storage and local backups. I think the most frustrating part is having to start a massive data backup from scratch. I will be buying an external drive for local backup and hope that it serves as a decent level of redundancy while I begin the process anew, closer to when my subscription ends.
Hopefully a docker will be available by then.
-
I have done exactly this... and they are still producing errors in the log.
Removed and reinstalled.
-
experiencing the same:
Feb 9 22:36:47 nerdyRAID root: error: plugins/preclear.disk/Preclear.php: wrong csrf_token
Feb 9 22:36:49 nerdyRAID kernel: BTRFS info (device sdc1): found 13528 extents
Feb 9 22:36:49 nerdyRAID kernel: BTRFS info (device sdc1): relocating block group 2168455168 flags 1
Feb 9 22:36:49 nerdyRAID kernel: BTRFS info (device sdc1): relocating block group 1094713344 flags 20
Feb 9 22:36:51 nerdyRAID kernel: BTRFS info (device sdc1): found 5438 extents
Feb 9 22:36:53 nerdyRAID root: error: plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Feb 9 22:36:58 nerdyRAID root: error: plugins/preclear.disk/Preclear.php: wrong csrf_token
Feb 9 22:36:59 nerdyRAID root: error: plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Feb 9 22:37:05 nerdyRAID root: error: plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Feb 9 22:37:09 nerdyRAID root: error: plugins/preclear.disk/Preclear.php: wrong csrf_token
Feb 9 22:37:11 nerdyRAID root: error: plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Feb 9 22:37:17 nerdyRAID root: error: plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Feb 9 22:37:20 nerdyRAID root: error: plugins/preclear.disk/Preclear.php: wrong csrf_token
Feb 9 22:37:23 nerdyRAID root: error: plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Feb 9 22:37:29 nerdyRAID root: error: plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Feb 9 22:37:31 nerdyRAID root: error: plugins/preclear.disk/Preclear.php: wrong csrf_token
Feb 9 22:37:35 nerdyRAID root: error: plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
Also, my "MAIN" tab seems to be refreshing every 5-10 seconds.
-
Yes.
docker exec -it nameofcontainer bash
https://docs.docker.com/engine/reference/commandline/exec/
Thank you kindly!
Worked a charm.
-
Questions:
Is it possible to access the PMS device .xml profiles inside of the docker? These are not stored in /appdata, so I imagine the resources are inside the container. Can one ssh into the container?
I want to edit chromecast.xml to add HEVC so my ULTRA direct-plays instead of transcoding my 4K media... as you can imagine, it pins my FX-8350 and is unwatchable.
Seeing lots of success stories of hardware acceleration on discrete solutions, which is nice. However, while KVM GPU passthrough is a reality, will docker-based passthrough be something tackled in the future?
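One way to reach those bundled profiles without ssh is docker cp. The profile path below is an assumption based on where Plex normally keeps its profiles on Linux installs, so verify it inside the container first:

```shell
# Locate the profile inside the running container (container name "plex" from the template).
docker exec plex find / -name 'Chromecast.xml' 2>/dev/null

# Copy it out, edit it on the host, then copy it back
# (the Resources/Profiles path is an assumption -- use whatever find reported).
docker cp plex:/usr/lib/plexmediaserver/Resources/Profiles/Chromecast.xml .
# ...edit Chromecast.xml to add the hevc entries, then:
docker cp Chromecast.xml plex:/usr/lib/plexmediaserver/Resources/Profiles/Chromecast.xml
```

Bear in mind any edit made inside the container is lost when the image is updated, so keep a copy of the modified file on the host.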
-
Can unRAID pass through NVIDIA drivers to a container?
-
Anyone has the ability to issue pull requests on this file: https://github.com/Squidly271/Community-Applications-Moderators/blob/master/Moderation.json for my review. Or to drop me a PM about any non-functioning container.
Also running into the 2-account issue. Tried restarting/reinstalling a couple of times but no luck.
I'm a long-time unRAID user but new to v6 & Docker... this was my first attempt at running a container. Is there a way to rate or flag containers as non-functional so they don't appear in the CA list?
My limited understanding of GMM was that it does function, albeit it goes downhill if IPs/users change.
Hey Squid... MAC address generation is causing invalid registration states.
-
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="plex" --net="host" -e TZ="America/New_York" -e HOST_OS="unRAID" -e "PUID"="99" -e "PGID"="100" -e "VERSION"="latest" -v "/mnt/user/MOVIES/":"/media/Movies":rw -v "/mnt/user/TV/":"/media/TV":rw -v "/mnt/user/PHOTOS/":"/media/Photos":rw -v "/mnt/user/MUSIC/":"/media/Music":rw -v "/mnt/user/appdata/containers/plex":"/config":rw linuxserver/plex
808df8dfa62c9755dd4b42e7b508fc306863ccceba985b5969dc2b7bacc2a076
if I use the shell inside of the container...
/config/Library/~
matches
/mnt/user/appdata/containers/Plex/Library/~
This SQLite error is in the PMS logs
Jan 07, 2017 20:23:56.513 [0x2b4a40801700] INFO - Plex Media Server v1.3.3.3148-b38628e - ubuntu PC x86_64 - build: linux-ubuntu-x86_64 ubuntu - GMT -05:00
Jan 07, 2017 20:23:56.513 [0x2b4a40801700] INFO - Linux version: 4.4.30-unRAID (#2 SMP PREEMPT Sat Nov 5 12:09:05 PDT 2016), language: C
Jan 07, 2017 20:23:56.513 [0x2b4a40801700] INFO - Processor AMD FX(tm)-8350 Eight-Core Processor
Jan 07, 2017 20:23:56.513 [0x2b4a40801700] INFO - ./Plex Media Server
Jan 07, 2017 20:23:56.511 [0x2b4a3507e7c0] DEBUG - BPQ: [idle] -> [starting]
Jan 07, 2017 20:23:56.512 [0x2b4a3507e7c0] DEBUG - Opening 20 database sessions to library (com.plexapp.plugins.library), SQLite 3.13.0, threadsafe=1
Jan 07, 2017 20:23:56.513 [0x2b4a3507e7c0] ERROR - SQLITE3:0x10, 5386, os_unix.c:33579: (19) mmap(/config/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db-shm) - No such device
Jan 07, 2017 20:23:56.513 [0x2b4a3507e7c0] ERROR - SQLITE3:0x10, 5386, disk I/O error
and this:
Jan 07, 2017 20:23:50.204 [0x2aee05001700] INFO - Plex Media Server v1.3.3.3148-b38628e - ubuntu PC x86_64 - build: linux-ubuntu-x86_64 ubuntu - GMT -05:00
Jan 07, 2017 20:23:50.204 [0x2aee05001700] INFO - Linux version: 4.4.30-unRAID (#2 SMP PREEMPT Sat Nov 5 12:09:05 PDT 2016), language: C
Jan 07, 2017 20:23:50.204 [0x2aee05001700] INFO - Processor AMD FX-8350 Eight-Core Processor
Jan 07, 2017 20:23:50.204 [0x2aee05001700] INFO - ./Plex Media Server
Jan 07, 2017 20:23:50.201 [0x2aedf98c67c0] DEBUG - BPQ: [idle] -> [starting]
Jan 07, 2017 20:23:50.202 [0x2aedf98c67c0] DEBUG - Opening 20 database sessions to library (com.plexapp.plugins.library), SQLite 3.13.0, threadsafe=1
Jan 07, 2017 20:23:50.204 [0x2aedf98c67c0] ERROR - SQLITE3:0x10, 5386, os_unix.c:33579: (19) mmap(/config/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db-shm) - No such device
Jan 07, 2017 20:23:50.204 [0x2aedf98c67c0] ERROR - SQLITE3:0x10, 5386, disk I/O error
Jan 07, 2017 20:23:50.204 [0x2aedf98c67c0] ERROR - Database corruption: sqlite3_statement_backend::prepare: disk I/O error for SQL: PRAGMA cache_size=2000
Jan 07, 2017 20:23:50.205 [0x2aedf98c67c0] ERROR - Error: Unable to set up server: sqlite3_statement_backend::prepare: disk I/O error for SQL: PRAGMA cache_size=2000 (N4soci10soci_errorE)
CHBMB got me sorted
/mnt/user needed to be /mnt/cache
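In other words, the fix was just the /config mapping in the run command. A sketch of the corrected flag, using the appdata path from the original command (all other flags unchanged):

```shell
# Broken: /config resolved through the user-share (fuse) layer,
# which breaks SQLite's mmap and corrupts the Plex database:
#   -v "/mnt/user/appdata/containers/plex":"/config":rw
# Working: point /config directly at the cache-disk path instead:
docker run -d --name="plex" --net="host" \
  -e PUID=99 -e PGID=100 \
  -v "/mnt/cache/appdata/containers/plex":"/config":rw \
  linuxserver/plex
```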
-
Ok, Thanks.
Stopped docker.
Disabled
Deleted Docker image
Stopped Arrays
Rebooted.
Will restart the array and start from scratch... when I first tried to install PMS I encountered this, and I do not remember how I fixed it. It just seems like the container cannot find its config internally, even though the paths are mapped correctly.
Here is the log at start-up
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...
[linuxserver.io banner]
Brought to you by linuxserver.io
We gratefully accept donations at: https://www.linuxserver.io/donations/
User uid: 99
User gid: 100
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-dbus: executing...
[cont-init.d] 30-dbus: exited 0.
[cont-init.d] 40-chown-files: executing...
[cont-init.d] 40-chown-files: exited 0.
[cont-init.d] 50-plex-update: executing...
# Login via the webui at http://<ip>:32400/web and restart the docker,
# because there was no preference file found, possibly first startup.
[cont-init.d] 50-plex-update: exited 0.
[cont-init.d] done.
[services.d] starting services
Starting dbus-daemon
Starting Plex Media Server.
[services.d] done.
6 3000 /config/Library/Application Support d
dbus[269]: [system] org.freedesktop.DBus.Error.AccessDenied: Failed to set fd limit to 65536: Operation not permitted
Starting Avahi daemon
Found user 'avahi' (UID 106) and group 'avahi' (GID 107).
Successfully dropped root privileges.
avahi-daemon 0.6.32-rc starting up.
No service file found in /etc/avahi/services.
*** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended.
*** socket() failed: Address family not supported by protocol
Failed to create IPv6 socket, proceeding in IPv4 only mode
socket() failed: Address family not supported by protocol
Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
New relevant interface docker0.IPv4 for mDNS.
Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
New relevant interface virbr0.IPv4 for mDNS.
Joining mDNS multicast group on interface br0.IPv4 with address 10.11.12.200.
New relevant interface br0.IPv4 for mDNS.
Network interface enumeration completed.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for 192.168.122.1 on virbr0.IPv4.
Registering new address record for 10.11.12.200 on br0.IPv4.
Server startup complete. Host name is nerdyRAID.local. Local service cookie is 1685075968.
Starting Plex Media Server. 6 3000 /config/Library/Application Support d
[line repeated ~25 times]
Got SIGTERM, quitting.
Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
Leaving mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
Leaving mDNS multicast group on interface br0.IPv4 with address 10.11.12.200.
avahi-daemon 0.6.32-rc exiting.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
-
Dunno... I actually just set it up again myself from scratch with no problems.
Are you just using a template... or going to CA, searching for Plex, clicking add, and going through the process that way?
-
It has to be something I am doing.
It was working fine, then I tried to tweak it to get plexpy to work.
I am so new to unix that I have little experience to offer. It looks like it is calling the daemon expecting a certain file at /config/Library/Application Support and not finding it.
-
Why would pointing /config to /mnt/cache/appdata/Plex cause the container not to load?
On NEW setups using unRAID 6.2RC4+, it doesn't matter if it's pointed to /mnt/user/appdata or /mnt/cache/appdata.
Prior to 6.2RC4, it really only worked properly if set to /mnt/cache/appdata (this is due to hardlink support).
On already-existing appdata, switching from /mnt/cache/appdata to /mnt/user/appdata may have strange results.
Yeah, I borked mine. Now I keep getting a recursive error, with the server looking for /mnt/user/Library/Application Support.
Best way to learn...is to break it.
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="plex" --net="host" -e TZ="America/New_York" -e HOST_OS="unRAID" -e "PUID"="99" -e "PGID"="100" -e "VERSION"="latest" -v "/mnt/user/appdata/plex":"/config":rw linuxserver/plex
178a27bec089cb1e6e89f665b6072e3484b24c1f5d2cb34ad9ad479b50b60540
The command finished successfully!
results in:
Starting Plex Media Server. 6 3000 /config/Library/Application Support d
[line repeated 8 times]
I have deleted the container, its data, and the docker image...
Maybe a permissions issue?
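If it is permissions, a quick check along these lines would tell. The uid/gid 99/100 come from the PUID/PGID in the run command above; the appdata path matches the /config mapping used there:

```shell
# The container runs as uid 99 / gid 100 (nobody:users on unRAID),
# so the /config tree must be owned by (or writable for) that user.
ls -ldn /mnt/user/appdata/plex

# If ownership is wrong, reset it recursively:
chown -R 99:100 /mnt/user/appdata/plex
```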
-
Why would pointing /config to /mnt/cache/appdata/Plex cause the container not to load?
-
I've been fighting Time Machine for over a year now; sometimes it works, but mostly not. Usually only after I try a thousand times to select the disk using one of many credentials does it suddenly find the unRAID disk and start the backup. Lately it got to 90% of a backup and then stopped with some *&^&*&* message, "disk in use" or some other message that could not be fixed in any way. Sometimes I even had to totally delete the share with the sparsebundle and could only make an initial, new backup, no incremental ones.
I ended up ditching that stupid Time Machine nonsense and am using Carbon Copy Cloner instead. Nicely scheduled once a week, it sends an email when done and creates bootable backups if you wish. Works like a charm.
Wow.
I would actually prefer a disk imaging tool. Thank you for the suggestion; I will definitely look into it. Macrium is my go-to tool on Windows.
-
Unfortunately, when I mount the share via the GUI, the log explodes with the same error.
I did pull this from the errlog in .AppleDB:
Finding last valid log LSN: file: 1 offset 28
mmap: No such device
mmap: No such device
mmap: No such device
Finding last valid log LSN: file: 1 offset 28
mmap: No such device
mmap: No such device
mmap: No such device
Finding last valid log LSN: file: 1 offset 28
mmap: No such device
mmap: No such device
mmap: No such device
Finding last valid log LSN: file: 1 offset 28
mmap: No such device
mmap: No such device
I am going to delete and try again
Add Share: macUP
Highwater
500MB min free
Split as needed
Disk4 assigned
AFP
Export Yes
Time Machine
512000MB limit
dbpath /mnt/user/system/AppleDB
Jan 7 02:49:30 nerdyRAID emhttp: shcmd (67308): mkdir '/mnt/user/macUP' |& logger
Jan 7 02:49:30 nerdyRAID emhttp: shcmd (67309): chmod 0777 '/mnt/user/macUP'
Jan 7 02:49:30 nerdyRAID emhttp: shcmd (67310): chown 'nobody':'users' '/mnt/user/macUP'
Jan 7 02:50:55 nerdyRAID emhttp: shcmd (67380): mkdir -p /mnt/user/system/AppleDB
Jan 7 02:50:55 nerdyRAID avahi-daemon[19967]: Files changed, reloading.
Jan 7 02:50:55 nerdyRAID avahi-daemon[19967]: Service group file /services/afp.service changed, reloading.
Jan 7 02:50:56 nerdyRAID avahi-daemon[19967]: Service "nerdyRAID-AFP" (/services/afp.service) successfully established.
And once I enter my credentials:
Jan 7 03:04:38 nerdyRAID cnid_dbd[8125]: error opening DB environment after recovery: No such device
Jan 7 03:04:38 nerdyRAID cnid_dbd[8125]: Failed to open CNID database for volume "macUP"
Jan 7 03:04:38 nerdyRAID cnid_dbd[8125]: Recreated CNID BerkeleyDB databases of volume "macUP"
Jan 7 03:04:38 nerdyRAID cnid_dbd[8125]: error opening DB environment after recovery: No such device
Jan 7 03:04:38 nerdyRAID cnid_dbd[8125]: open_db() failed: No such device
Jan 7 03:04:38 nerdyRAID cnid_dbd[8125]: reinit_db() failed: No such device
Jan 7 03:04:38 nerdyRAID afpd[8059]: read: Connection reset by peer
Jan 7 03:04:39 nerdyRAID cnid_dbd[8135]: error opening DB environment after recovery: No such device
Jan 7 03:04:39 nerdyRAID cnid_dbd[8135]: Failed to open CNID database for volume "macUP"
Jan 7 03:04:39 nerdyRAID cnid_dbd[8135]: Recreated CNID BerkeleyDB databases of volume "macUP"
Jan 7 03:04:39 nerdyRAID cnid_dbd[8135]: error opening DB environment after recovery: No such device
Jan 7 03:04:39 nerdyRAID cnid_dbd[8135]: open_db() failed: No such device
Jan 7 03:04:39 nerdyRAID cnid_dbd[8135]: reinit_db() failed: No such device
Jan 7 03:04:39 nerdyRAID afpd[8059]: read: Connection reset by peer
Jan 7 03:04:40 nerdyRAID cnid_metad[8141]: Multiple attempts to start CNID db daemon for "/mnt/user/macUP" failed, wiping the slate clean...
Jan 7 03:04:40 nerdyRAID cnid_dbd[8141]: error opening DB environment after recovery: No such device
Jan 7 03:04:40 nerdyRAID cnid_dbd[8141]: Failed to open CNID database for volume "macUP"
Jan 7 03:04:40 nerdyRAID cnid_dbd[8141]: Recreated CNID BerkeleyDB databases of volume "macUP"
Jan 7 03:04:40 nerdyRAID cnid_dbd[8141]: error opening DB environment after recovery: No such device
Jan 7 03:04:40 nerdyRAID cnid_dbd[8141]: open_db() failed: No such device
Jan 7 03:04:40 nerdyRAID cnid_dbd[8141]: reinit_db() failed: No such device
Jan 7 03:04:40 nerdyRAID afpd[8059]: read: Connection reset by peer
Jan 7 03:04:41 nerdyRAID cnid_dbd[8150]: error opening DB environment after recovery: No such device
Jan 7 03:04:41 nerdyRAID cnid_dbd[8150]: Failed to open CNID database for volume "macUP"
Jan 7 03:04:41 nerdyRAID cnid_dbd[8150]: Recreated CNID BerkeleyDB databases of volume "macUP"
Jan 7 03:04:41 nerdyRAID cnid_dbd[8150]: error opening DB environment after recovery: No such device
Jan 7 03:04:41 nerdyRAID cnid_dbd[8150]: open_db() failed: No such device
Jan 7 03:04:41 nerdyRAID cnid_dbd[8150]: reinit_db() failed: No such device
Jan 7 03:04:41 nerdyRAID afpd[8059]: read: Connection reset by peer
Jan 7 03:04:42 nerdyRAID afpd[8059]: read: Connection reset by peer
Jan 7 03:04:43 nerdyRAID afpd[8059]: read: Connection reset by peer
Jan 7 03:04:44 nerdyRAID afpd[8059]: read: Connection reset by peer
Jan 7 03:04:45 nerdyRAID afpd[8059]: read: Connection reset by peer
Jan 7 03:04:46 nerdyRAID afpd[8059]: read: Connection reset by peer
Jan 7 03:04:47 nerdyRAID afpd[8059]: read: Connection reset by peer
Jan 7 03:04:47 nerdyRAID afpd[8059]: transmit: Request to dbd daemon (volume macUP) timed out.
Jan 7 03:04:47 nerdyRAID afpd[8059]: afp_openvol(/mnt/user/macUP): Fatal error: Unable to get stamp value from CNID backend
Jan 7 03:04:47 nerdyRAID afpd[8059]: dsi_stream_read: len:0, unexpected EOF
-
Understood.
It's a shame, since HFS+ as a filesystem is getting long in the tooth.
Reading suggests that AFP could be deprecated over the next few point updates (in favor of SMB), though that is speculation. The announcement of the new APFS will hopefully bring a robust set of conditions for network-based disk imaging and incremental backup.
I will try your suggestions later this evening.
I will also try your folder structure...
/mnt/user/system/AppleDB
the folder /macUP will then be recreated with .AppleDB nested within it, correct?
I used rm -rf to delete .AppleDB, and MC to delete the share.
DISK read errors: Parity
in General Support
Posted
I actually did that after I posted <facepalm> (saw the errors at the end of the syslog).
I will run a parity check and confirm. Are the read errors symptomatic of impending failure?