Mikael Kalms

  1. Error during sync to git: Cannot access a disposed object.

    Yet further digging made me realize that it was another 1GB limitation that I hit. Here are the steps I had to go through.

    Pick your Docker version

    Docker comes in several forms:
    - Docker for Windows / Docker for Mac. These versions integrate tightly with the OS & desktop, and use OS-specific virtualization mechanisms (Hyper-V on Windows, the Hypervisor framework on macOS). The OS & virtualization integration requires Windows 10 Pro or later, or OS X Yosemite 10.10.3 or later.
    - Docker Toolbox for Windows / Docker Toolbox for Mac. These versions integrate less with the OS & desktop, and rely on VirtualBox for VM management. Since they are not as tightly integrated into the OS, they work on a broader range of OS versions (Windows 7 and newer, OS X Mountain Lion 10.8 and newer).
    - Docker for Linux comes in only one form. It runs containers directly on the host kernel, with no VM involved.

    I usually use Docker Toolbox for Windows on my workstations, since that gives me the same development environment at home as at work.

    Set OS virtualization support switches appropriately

    In order to use any of these virtualization technologies, the machine must have virtualization enabled in the BIOS. The setting is normally referred to as Intel Virtualization Technology or AMD-V. In addition, a user who wants to use Hyper-V virtualization on Windows 10 Pro must also install the Hyper-V role. VirtualBox, on the other hand, requires that Hyper-V is disabled: if you try to start a VirtualBox VM while Hyper-V is enabled, the machine will bluescreen.

    Change VM memory size settings

    For Hyper-V, the VM that runs containers defaults to 1GB of memory. See this thread, this thread and this documentation article for background information. I haven't tried this myself, so I don't know for sure what will help.

    For VirtualBox, the VM that runs containers also defaults to 1GB of memory. If you launch the Oracle VM VirtualBox GUI, you will notice that Docker Toolbox has created a VM called "default". You can go into the settings of this VM and increase its memory allocation. (If the VM is already running, you need to stop it before you can make changes to it.) Once you have changed the VM memory size, you can verify the increased memory limit by starting a container and running "docker stats <mycontainer>".
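The same resize can also be done from the command line. A sketch, assuming the Toolbox-created VM is named "default" and that 4096MB is a suitable size for your machine (adjust both to your setup):

```shell
# Stop the Docker Toolbox VM before reconfiguring it
docker-machine stop default

# Raise the VM's memory allocation from the default 1024MB to 4096MB
VBoxManage modifyvm default --memory 4096

# Start the VM again; containers can now use up to 4GB
docker-machine start default
```

This is equivalent to changing the memory slider in the VirtualBox GUI; VBoxManage simply edits the same VM settings.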
  2. Error during sync to git: Cannot access a disposed object.

    Thanks. Turns out this is not a Plastic problem:
    - I'm running the local Plastic server and the local Git server in Docker containers, on Windows 10.
    - The Hyper-V isolation that Docker uses under Windows 10 has a hard limit of max 1GB memory allocation per container (https://github.com/moby/moby/issues/31604).
    - Plastic happens to use more than 1GB of memory during the "Packaging" step in the GitSync process.

    So the Plastic process is running out of memory, and the limit is set unreasonably low on my development machine, that's all. This will not happen in our production system, since that uses Linux as the host OS. I haven't yet found a way to raise the 1GB limit under Windows.
  3. Hi, when I synchronize a large repository (~5GB workspace) between a local Plastic server and a local Git server, I get the following error:

        root@e620c45ed378:~# cm sync FreedomPrototype git git@git-server:repos/FreedomPrototype.git
        - /main: Local changes
        - /main/Testbranch/Testbranch2: Local changes
        - /main/Testbranch: Local changes
        Receiving references... /
        - 0 changesets to pull
        - 1146 changesets to push
        Receiving references... OK
        There are changes to push.
        Exporting... OK 1145/1146
        Packaging... - 0/24743
        Error: Cannot access a disposed object.
        Object name: 'handle'.

    I am using the latest Plastic server version:

        root@e620c45ed378:/# cm version
        6.0.16.1614

    The Plastic server runs on Ubuntu 14.04. A replica of the repository resides in Plastic Cloud, is successfully replicated to another Plastic server, and is incrementally (every minute) synced to another Git server. I think the problem here is that the Plastic->Git sync falls over when it needs to sync all changesets on the branch in one go. Debugging tips, please?
  4. Plastic Change Tracker feedback

    Hi, and just FYI, I'm running Labs release 6.0.16.1168. I have had the Plastic Change Tracker service active for some days, but will disable it for the time being. I notice that when editing text files with Visual Studio 2017, the change tracker service sees Visual Studio's file modifications as rename + delete + add operations (probably because that is what VS does under the hood). See the attached image for an example; I had edited BoardLogic.cs & BoardPresentation.cs in that situation.
  5. License file required for local server?

    Update: I requested a Personal Edition license, received it, and installed it on my server machine. It works nicely!
  6. Hi, as part of the "Plastic Cloud to Unity Cloud Build" bridge we're running (https://github.com/falldamagestudio/plastic-cloud-to-ucb/), we run a regular Plastic server + a Git server on a VM in Google's cloud. It has run correctly for about 50 days, but today any cm command prints the message "Limited by days evaluation license has expired."

    Do I need to install a new license file to make this work for more than 1-2 months at a time? If so, how do I obtain that license? There is no plasticd.token.lic file on the machine. There is a plasticd.lic file -- I believe that the server created that itself during setup.

    Info when I try showing license details:

        root@f75b48db5f23:/opt/plasticscm5/server# cm li
        Limited by days evaluation license has expired.

    The Plastic server was installed on the box by performing:

        echo "deb http://www.plasticscm.com/plasticrepo/plasticscm-common/Ubuntu_14.04/ ./" > /etc/apt/sources.list.d/plastic.list
        echo "deb http://www.plasticscm.com/plasticrepo/plasticscm-latest/Ubuntu_14.04/ ./" >> /etc/apt/sources.list.d/plastic.list
        wget -q http://www.plasticscm.com/plasticrepo/plasticscm-common/Ubuntu_14.04/Release.key -O - | apt-key add -
        wget -q http://www.plasticscm.com/plasticrepo/plasticscm-latest/Ubuntu_14.04/Release.key -O - | apt-key add -
        DEBIAN_FRONTEND=noninteractive apt-get -q update && apt-get install -y -q plasticscm-complete && plasticsd stop
        clconfigureserver --language=en --port=8087 --workingmode=UPWorkingMode

    + more commands to create *.conf files

    I only want to run commands locally against the Plastic server. It will connect to Plastic Cloud, and to a local Git server (via GitSync). Our organization currently subscribes purely as Plastic Cloud users, so when I check https://www.plasticscm.com/download/dashboard I don't see any appropriate license files to download for a regular Plastic server.
  7. True, https://plasticscm.uservoice.com/forums/15467-general/suggestions/15615156-show-diffs-in-branch-explorer-and-changesets-views is a matching UserVoice suggestion. That works, sort of. Thanks! Not as convenient as a docked view, though.
  8. Hi, when I navigate around in the Branch Explorer, I often find that I would like to use it to understand how the set of files changes as I move along the changesets of a branch, or a pair of branches. More formally: I would like to be able to browse history top-down (branches -> changesets -> files -> edits within files), to move around in that history quickly, and to see information about branches + changesets + files in the same view (without having to do more than one click per changeset, or open many windows).

    Is this already supported with the current UI? If not, one option could be to let part of the Branch Explorer (optionally) show the list of modified files, kind of like in the attached image mockup.

    Right now I use the following approaches:
    1) Select changesets, one by one, and press Ctrl-D on each, then look through the list of changed files in each of the windows that open. This is convoluted.
    2) Right-click on the branch, choose "View > Explore changesets on this branch", look in the branch explorer and memorize which range of changesets I am interested in, scroll through the list of changesets to find that range, and inspect those changesets.

    Both approaches work, but they are more effort than I would like them to be.
  9. Problems with 'cm find' on Linux

    Thanks, that works for me. I can use the command now. This was not at all obvious from the documentation, though. Performing "cm find --help" on the command line on the Linux box shows:

        root@a863dafd279b:/# cm find --help
        Perform queries to obtain objects.

        Usage:
        cm find object_type [where str_conditions]
            [on repository 'rep_spec' | on repositories <'rep_spec'>+]
            [--format=str_format] [--dateformat=date_format]
            [--nototal] [--file=dump_file] [--xml] [--encoding=name]

        object_type    Object type to find. (See the 'CM FIND GUIDE'
                       to see all the objects to find.)
        ... etc etc ...

    and if I compare this to another command:

        root@a863dafd279b:/# cm replicate --help
        Push or pull a branch to another repo.

        Usage:
        cm replicate src_br_spec dst_rep_spec [--push] [--preview]
            [TranslateOptions] [--user=usr_name [--password=pwd] | AuthOptions]
            (Direct server-to-server replication. Replicates a branch to a repository.)
        cm replicate src_br_spec --package=pack_file [AuthOptions]
            (Package based replication. Creates a replication package in the source server with the selected branch.)
        cm replicate dst_rep_spec --import=pack_file [TranslateOptions]
            (Package based replication. Imports the package in the destination server.)

    ... then I take it that "cm find" wants a single text-string argument, whereas "cm replicate" wants multiple arguments? I was unable to see a difference like that in the docs themselves, nor in the "cm find" example page (https://www.plasticscm.com/documentation/cmfind/plastic-scm-version-control-query-system-guide.shtml) ...
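The difference boils down to plain shell behavior: on Linux, the shell strips the single quotes and splits an unquoted query into separate arguments, while "cm find" expects the whole query as a single string. A minimal sketch of the effect, with a hypothetical printargs stand-in instead of cm (no Plastic server needed):

```shell
# Stand-in that prints each argument on its own line,
# exactly as the invoked program would receive them
printargs() { for a in "$@"; do printf '%s\n' "$a"; done; }

# Unquoted: the shell removes the single quotes and passes
# "branch", "on", "repository", "myrepo" as four separate arguments
printargs branch on repository 'myrepo'

# Double-quoted: the whole query arrives as ONE argument,
# with the inner single quotes intact
printargs "branch on repository 'myrepo'"
```

So on Linux the whole query presumably needs to be wrapped in double quotes, e.g. cm find "branch on repository '<reponame>'", whereas cmd.exe on Windows does not strip single quotes and happens to accept the unquoted form.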
  10. Problems with 'cm find' on Linux

    Hi, I would like to view the structure of a repository via the command line, but am running into problems early on. First I would like to list all existing branches. The following command works on my Windows workstation, but not on a Linux machine (note: there it runs inside a Docker container on an Azure VM):

        cm find branch on repository '<reponame>'

    Windows example:

        C:\Plastic>cm version
        5.4.16.809
        C:\Plastic>cm find branch on repository '<reponame>'
        <a number of lines with information on all branches in <reponame>>

    Linux example:

        root@a863dafd279b:/# cm version
        6.0.16.884
        root@a863dafd279b:/# cm find branch on repository '<reponame>'
        Query error: expecting "STRING", found '<reponame>'

    I have tried various forms, but I keep getting query errors. I never manage to get it to say that the repo in question does not exist. (If I create a workspace from the repository on the Linux machine, I can enter the workspace directory, perform 'cm find branch', and it will print info on the branches.) Perhaps this is something 6.0.x-specific? Any ideas?
  11. cm replicate stays forever at CalculatingInitialChangeset

    Update: This is indeed an interaction between Docker and Google Compute Engine.
    - GCE has a network-wide MTU of 1460.
    - Docker ignores this and sets up a number of extra network interfaces with MTU 1500.
    - I'm not certain, but I expect that GCE machines have Large Segment Offload active.

    These three factors combined mean that if an application (such as Plastic) attempts to send a lot of data via TCP, some fragments of the TCP communication get dropped. Net result: Plastic local server -> Plastic Cloud communication becomes unreliable.

    I have made things work in our case by manually forcing the MTU of all network interfaces related to the Docker containers to 1460: https://github.com/falldamagestudio/plastic-cloud-to-ucb/commit/91faf02257ea4d41531541d9000277d36f268668

    Issue log: https://github.com/falldamagestudio/plastic-cloud-to-ucb/issues/15
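An alternative to patching interfaces one by one is to tell the Docker daemon itself to use the smaller MTU for its default bridge. A sketch of the daemon configuration, assuming the standard /etc/docker/daemon.json location (note this covers the default bridge only; networks created later, e.g. by docker-compose, carry their own MTU option):

```json
{
  "mtu": 1460
}
```

The daemon must be restarted for the setting to take effect, and already-created networks keep their old MTU.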
  12. Plastic Cloud backup/restore policies?

    Thanks, appreciated.
  13. Plastic Cloud backup/restore policies?

    Hi, what are the backup/restore policies for Plastic Cloud? More specifically:
    - If we delete an entire repository in Plastic Cloud, can we have the repository restored later? Are there any limitations (time limits etc.) we need to be aware of?
    - If content within a repository is deleted (for example, a single changeset or a single branch), can we have that single changeset or branch restored later? Are there any limitations we need to be aware of?
  14. Does Plastic support Unity Cloud Build yet?

    For those interested, we have built a bridge from Plastic Cloud to UCB that we run in a VM in Azure: https://github.com/falldamagestudio/plastic-cloud-to-ucb/ It works, and has appropriate authentication, but takes a lot of effort to configure correctly.
  15. cm replicate stays forever at CalculatingInitialChangeset

    Results after the support session:
    - Running the Plastic software inside a Docker container, on a Docker host, on a VM, on Google Compute Engine has networking problems when communicating with Plastic Cloud. It can connect to the Plastic Cloud servers and execute some commands, but "cm replicate" fails with obscure timeouts.
    - Running the Plastic software inside a Docker container, on a Docker host, on a VM, on Azure works fine when communicating with Plastic Cloud. This is what we will do for the time being.