Mikael Kalms

Error during sync to git: Cannot access a disposed object.



When I synchronize a large repository (~5GB workspace) between a local Plastic server and a local Git server, I get the following error:

root@e620c45ed378:~# cm sync FreedomPrototype git git@git-server:repos/FreedomPrototype.git

- /main: Local changes
- /main/Testbranch/Testbranch2: Local changes
- /main/Testbranch: Local changes
Receiving references... /
- 0 changesets to pull
- 1146 changesets to push

Receiving references... OK
There are changes to push.
Exporting... OK1145/1146
Packaging...... - 0/24743Error: Cannot access a disposed object.
Object name: 'handle'.

I am using the latest Plastic server version:

root@e620c45ed378:/# cm version

The Plastic server runs on Ubuntu 14.04.


A replica of the repository resides in Plastic Cloud, is successfully replicated to another Plastic server, and is incrementally synced (every minute) to another Git server. I think the problem here is that the Plastic->Git sync falls over when it needs to sync all changesets on the branch in one go.

Debugging tips please?



Turns out this is not a Plastic problem:

- I'm running the local Plastic server and the local Git server in Docker containers, on Windows 10

- The Hyper-V isolation that Docker uses under Windows 10 has a hard limit of max 1GB memory allocation per container (https://github.com/moby/moby/issues/31604)

- Plastic happens to use more than 1GB of memory during the "Packaging" step in the GitSync process


So: the Plastic process is running out of memory because the limit is set unreasonably low on my development machine, that's all. This will not happen in our production system, since that uses Linux as the host OS. I haven't yet found a way to raise the 1GB limit under Windows.
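For anyone who wants to check whether a container is hitting a memory cap, Docker can report the limits at runtime. A quick sketch (the container name `plastic-server` is made up, substitute your own):

```shell
# Show the per-container memory limit in bytes
# (0 means "no explicit limit" -- the VM's own size still caps it)
docker inspect --format '{{.HostConfig.Memory}}' plastic-server

# One-shot snapshot of memory usage vs. limit for all running containers
docker stats --no-stream
```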


Oh! Thank you for taking the time to debug the issue and coming back with the answer!

I wasn't aware of the memory limitation of the Docker containers; this is valuable information for the rest of the community. If you eventually find out how to increase the 1GB limit, please come back to us so we can learn it too.


Yet further digging made me realize that it was another 1GB limitation that I hit.

Here are the steps that I had to go through.


Pick your Docker version

Docker comes in several forms:

- Docker for Windows / Docker for Mac. These versions integrate deeply with the OS & desktop, and use OS-specific virtualization mechanisms (Hyper-V on Windows / the macOS Hypervisor framework). That integration requires Windows 10 Pro or higher, or OS X Yosemite 10.10.3 or above.

- Docker Toolbox for Windows / Docker Toolbox for Mac. These versions integrate less with the OS & desktop, and rely on VirtualBox for VM management. Since they are not as tightly integrated into the OS, they work on a broader range of OS versions (Windows 7 or newer, and OS X Mountain Lion 10.8 or newer).

- Docker for Linux comes in only one form. It runs containers directly on the host kernel (via namespaces and cgroups), so no VM is involved.

I usually use Docker Toolbox for Windows for my workstations since that gives me the same development environment at home as at work.


Set OS virtualization support switches appropriately

In order to use any of these virtualization technologies, the machine must have virtualization enabled in the BIOS. The setting is normally referred to as Intel Virtualization Technology (VT-x) or AMD-V.

In addition, if the user wants to use Hyper-V virtualization in Windows 10 Pro, the user must also install the Hyper-V role.
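If you prefer to add the Hyper-V role from the command line instead of the Windows Features dialog, an elevated PowerShell prompt can do it (a sketch; a reboot is required afterwards):

```powershell
# Enable the Hyper-V feature including management tools (requires elevation)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```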

VirtualBox, on the other hand, requires that Hyper-V is disabled; if the user tries to start a VirtualBox VM while Hyper-V is enabled, the machine will bluescreen.
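If you need to switch between Hyper-V and VirtualBox on the same machine, the hypervisor launch setting can be toggled from an elevated command prompt (a sketch; each change takes effect after a reboot):

```shell
:: Disable Hyper-V at boot so VirtualBox VMs can start
bcdedit /set hypervisorlaunchtype off

:: Re-enable Hyper-V at boot when you want it back
bcdedit /set hypervisorlaunchtype auto
```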


Change VM memory size settings

For Hyper-V the VM that runs containers will default to 1GB in size. See this thread, this thread and this documentation article for background information. I haven't tried this out myself so I don't know for sure what will help.
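If you do want to try resizing the Hyper-V VM, something along these lines should work from an elevated PowerShell prompt (an untested sketch; `MobyLinuxVM` is the name Docker for Windows gives its VM at the time of writing, yours may differ):

```powershell
# Stop the Docker VM, raise its startup memory, and start it again
Stop-VM MobyLinuxVM
Set-VMMemory MobyLinuxVM -StartupBytes 4GB
Start-VM MobyLinuxVM
```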

For VirtualBox the VM that runs containers will also default to 1GB in size. If you launch the Oracle VM VirtualBox GUI, you will notice that Docker Toolbox has created a VM called "default". You can go into the settings of this VM and increase the memory size to something higher. (If the VM is already running, you need to stop it before you can make changes to it.)
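The same resize can be done without the GUI, using docker-machine and VBoxManage (a sketch; `default` is the VM name Docker Toolbox creates, and 4096 MB is an arbitrary choice):

```shell
# Stop the Docker Toolbox VM, raise its memory to 4 GB, restart it
docker-machine stop default
VBoxManage modifyvm default --memory 4096
docker-machine start default
```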

Once you have changed the VM memory size, you can verify the new limit with "docker info", which reports the VM's total memory.
