
Synology DS1815+ with Crashplan

I recently bought a very nice new Synology DS1815+ to replace my self-built NAS, which was based on an HP MicroServer and OpenMediaVault. Although performance was no issue, I had to spend too much time fixing things after software upgrades, apt-get this and apt-get that, etc.


So, here we are: a brand new Synology DS1815+. It comes with
  • 8 bays (extendable to 18 bays with two 5-bay expansion units)
  • 2 GB RAM (extendable to 16 GB)
  • A 2.4 GHz quad-core CPU (Intel Atom C2538)
  • A management UI that helps to run your NAS smoothly
I own an existing CrashPlan+ account, which means I pay $$$ to back up all my data to their datacenter. Fortunately, there is already a package that helps with installing CrashPlan on a Synology NAS.


I followed the package's installation guide and worked with Mike Tabor's guide for the Windows client installation. However, installing Java explicitly was not needed.

CrashPlan also describes the steps needed to install CrashPlan headless and control the service from a remote computer here: Using CrashPlan On A Headless Computer. Unfortunately, there is no web UI available, so you have to stick with the Java client.

Adopt old NAS

After logging in, the application offered me the option to adopt the computer and reuse the existing configuration. In addition, I had to enter the encryption key that I had set up on my previous NAS (did I mention that CrashPlan encrypts all data locally, or at least says so...).
Adopting a computer makes things a lot easier when the folder paths didn't change, which was not the case for me. But at least I had all my backup configuration back. See the CrashPlan docs: Adopting A Computer With A Different Operating System Or File System.

From the documentation
Once the adoption has completed and all previous settings have been applied, CrashPlan will report the old locations as "missing". By updating the file selection to include both the old locations and the new locations of your files (e.g., under a folder with a different user name), CrashPlan will de-duplicate the data.

So, just add the new folder paths to the already existing backup configurations and let CrashPlan do its job.

Backup Large Files

CrashPlan is not optimized for backing up large files over fast networks. It tries to keep the amount of data that needs to be transferred as low as possible by de-duplicating all data. This really eats CPU resources, especially for large files. I was affected by this since I back up large files and was not able to upload at full speed (I have a 100 Mbit/s connection, yay!). Other users reported this to CrashPlan and found a way to completely disable de-duplication by patching the configuration XML for your backup sets:

With CrashPlan on Synology, the configuration file is stored at /volume1/@appstore/CrashPlan/conf/my.service.xml.

In order to edit the file you might need to chmod the file to 0777.

Setting <dataDeDupAutoMaxFileSizeForWan> to 1 actually disables de-duplication for all files larger than 1 byte.
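The edit can be done from an SSH shell on the NAS. This is a minimal sketch; for illustration it works on a local stand-in file, while on the DiskStation you would point CONF at /volume1/@appstore/CrashPlan/conf/my.service.xml instead:

```shell
# On the NAS this would be /volume1/@appstore/CrashPlan/conf/my.service.xml;
# here we create a small stand-in file so the commands can be tried safely.
CONF=./my.service.xml
printf '<dataDeDupAutoMaxFileSizeForWan>0</dataDeDupAutoMaxFileSizeForWan>\n' > "$CONF"

# make the file writable first
chmod 0777 "$CONF"

# set the WAN de-duplication threshold to 1 byte
sed -i 's|<dataDeDupAutoMaxFileSizeForWan>[0-9]*<|<dataDeDupAutoMaxFileSizeForWan>1<|' "$CONF"

cat "$CONF"   # prints <dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>
```

Remember to restart the CrashPlan service afterwards so the new value is picked up.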

After that change, I experienced a constant upload speed of around 50 Mbit/s for more than 10 days.

Backup Large Files - Part 2

Depending on the size of your backup or the number of files, CrashPlan suggests adjusting the maximum heap size of the CrashPlan service according to the list below.
  • Up to 1 TB or up to 1 million files: 1024 MB (default)
  • 1.5 TB or 1.5 million files: 1536 MB
  • 2 TB or 2 million files: 2048 MB
  • 2.5 TB or 2.5 million files: 2560 MB
  • 3 TB or 3 million files: 3072 MB
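The recommendations boil down to roughly 1 GB of heap per TB of data, or per million files, whichever is larger. A quick sanity-check helper (my own sketch, not part of CrashPlan; whole TB and whole millions only):

```shell
# heap_mb SIZE_TB FILES_MILLIONS
# prints the suggested maximum heap in MB: 1024 MB per TB or per
# million files, whichever dominates
heap_mb() {
  n=$(( $1 > $2 ? $1 : $2 ))
  echo $(( n * 1024 ))
}

heap_mb 3 2   # prints 3072
```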
You might run into this issue if your desktop application loses its connection without further notice and the CrashPlan service restarts often. Also have a look into the log files located in /volume1/@appstore/CrashPlan/logs. The file service.log.0 is always the current log file. I found messages like

[02.28.17 18:33:44.657 ERROR ub-BackupMgr .service.backup.BackupController] OutOfMemoryError occurred...RESTARTING! message=OutOfMemoryError in BackupQueue! FileTodo[fileTodoIndex = FileTodoIndex[backupFile=BackupFile[9402b4c3bc13d7d94c332778b2785e80, parent=1a6c7d5473e7f2fda60ccd3342a6cc0a, type=1, sourcePath=/volume1/Backup/Continous/WS-Michael/WindowsImageBackup/WS-Michael/Backup 2017-02-26 180012], newFile=true, state=NORMAL, sourceLength=0, sourceLastMod=1488151395000], lastVersion = null, startTime = 1488303169489, doneAnalyzing = false, numSourceBytesAnalyzed = 0, doneSending = false, %completed = 100.00%, numSourceBytesCompleted = 0, isMetadataOnly = false], source=BackupQueue[656452255095980289>42, running=t, #tasks=0, sets=[BackupFileTodoSet[backupSetId=662351273055813757, guid=42, doneLoadingFiles=t, doneLoadingTasks=f, FileTodoSet@1146110426[ path = /volume1/@appstore/CrashPlan/cache/cpft662351273055813757_42, closed = false, dataSize = 161623, headerSize = 0], numTodos = 1046, numBytes = 399328285205]], BackupFileTodoSet[backupSetId=1, guid=42, doneLoadingFiles=t, doneLoadingTasks=f, FileTodoSet@53672476[ path = /volume1/@appstore/CrashPlan/cache/cpft1_42, closed = false, dataSize = 45652, headerSize = 0], numTodos = 241, numBytes = 158298842385]], BackupFileTodoSet[backupSetId=662535054555414653, guid=42, doneLoadingFiles=f, doneLoadingTasks=f, FileTodoSet@450868664[ path = /volume1/@appstore/CrashPlan/cache/cpft662535054555414653_42, closed = false, dataSize = 12, headerSize = 0], numTodos = 0, numBytes = 0]]], env=BackupEnv[envTime = 1488303169437, near = false, todoSharedMemory = SharedMemory[b.length = 2359296, allocIndex = -1, freeIndex = 0, closed = false, waitingAllocLength = 0], taskSharedMemory = SharedMemory[b.length = 2359296, allocIndex = -1, freeIndex = 0, closed = false, waitingAllocLength = 0]], TodoWorker@1180241360[ threadName = BQTodoWkr-42, stopped = false, running = true, thread.isDaemon = false, thread.isAlive = true, thread = 
Thread[W1066196265_BQTodoWkr-42,5,main]], TaskWorker@1819508699[ threadName = BQTaskWrk-42, stopped = false, running = true, thread.isDaemon = false, thread.isAlive = true, thread = Thread[W1605338114_BQTaskWrk-42,5,main]]], oomStack=java.lang.OutOfMemoryError: Java heap space
    at gnu.trove.impl.hash.THash.postInsertHook(
    at com.code42.backup.manifest.WeakIndex.index(
    at com.code42.backup.manifest.BlockLookupCache$WeakCache.buildIndex(
    at com.code42.backup.manifest.BlockLookupCache$WeakCache.containsWeak(
The article also shows how to change these settings using the hidden UI console that opens when you double-click the CrashPlan logo in the app. Unfortunately, this does not work if CrashPlan was installed as a Synology package.

Solution: change the settings in the file /volume1/@appstore/CrashPlan/syno_package.vars. The example below shows 1.5 GB of RAM.

#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)
#you probably only want more than the recommended 1024M if you're backing up extremely large volumes of files
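The snippet above only contains the explanatory comments from the file; the variable to uncomment and set below them is, in my copy of the package, called USR_MAX_HEAP (an assumption — verify the name against your own syno_package.vars):

```shell
# assumed variable name from the Synology CrashPlan package; 1536M = 1.5 GB
USR_MAX_HEAP=1536M
```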

After restarting the service in your Synology's package management console, you should see that CrashPlan is now able to consume more memory. Increase the value further if you still notice spikes in the memory graph; in my case these were all caused by frequent restarts of the whole application, and memory consumption became much more stable after the change.

Additional Memory has already been ordered! :-)

