Mikael Kalms

  1. revInfo Owner is null before the cache

    I guess you synchronized the repository where you had the initial failure and then you retried the replica using the command line, right?

    Yes. I also ran 'cm checkdatabase' -- this confirmed that the DB was not consistent.

    So I think there are two things: 1) I am having problems pulling, and 2) the "cm replicate" command will, under some circumstances, result in inconsistent DB contents -- at least when working against .sqlite format DBs, and even though it uses a transaction-based design (right?).

    The problems with pulling prevent me from "trying again": I have not yet succeeded in pulling down the repository to that computer in one go. I will run tests with a laptop in my home later; it will be connected over the same home WiFi. I will get back to you with info on how well that works.

    I have now done a full "cm replicate" from the Cloud repo to a local repo on my office workstation. That replication succeeded without any errors.
  2. revInfo Owner is null before the cache

    Hi, I am seeing similar symptoms on my home machine. I needed to pull down an entire new repository from Plastic Cloud to my home machine.

    For unknown reasons, I had problems even downloading an updated Plastic Cloud installer: the download failed about 5 times throughout the 160MB download. This could either be my home network - the machine is connected to a home router over a wireless network - or problems with my ISP/Azure. I have not seen similar problems at our office.

    Anyhow, after installing a new version of the Plastic SCM client, I created a new sync view and attempted to pull a repository containing perhaps 250-500 branches and ~8GB of data. This failed within minutes. I resorted to doing "cm replicate /main/<child branch>@...@cloud <localrepo>@local" and fetching 1-2 weeks of work at a time, as otherwise the command-line replication would also fail with messages like "The data for revision 236293 (segment 3) cannot be read from rep 8865.". I also interrupted a few replication operations with Ctrl-C.

    At one point, "cm replicate" complained that a changeset was locked. I stopped the local Plastic server and restarted it; "cm replicate" was then capable of performing replications again.

    After I had replicated most of the branches, I went into the Plastic SCM GUI and performed a full sync via the sync view. No errors this time. After this, if I attempt to update the workspace (which previously was empty), after a few seconds I get a popup saying that the Plastic client failed to update certain files on my machine, with the reason "revInfo.Owner is null before the cache". Redoing the operation or forcing the operation does not help.
    I get messages like these in my server's plastic.debug.log.txt:

    2017-11-30 16:44:47,829 W-39 00000000-0000-0000-0000-000000000000 kalms@falldamagestudio.com KALMSHOMEDESKTO DEBUG Transaction - Begin implicit transaction C:\Program Files\PlasticSCM5\server\rep_5.plastic.sqlite
    2017-11-30 16:44:47,840 W-39 00000000-0000-0000-0000-000000000000 kalms@falldamagestudio.com KALMSHOMEDESKTO DEBUG BranchExplorerQuery - Read 286 branches 0ms
    2017-11-30 16:44:47,871 W-39 00000000-0000-0000-0000-000000000000 kalms@falldamagestudio.com KALMSHOMEDESKTO DEBUG Security - SEID with id 494 was not found
    2017-11-30 16:44:47,871 W-39 00000000-0000-0000-0000-000000000000 kalms@falldamagestudio.com KALMSHOMEDESKTO DEBUG Security - SEID with id 494 was not found
    2017-11-30 16:44:47,871 W-39 00000000-0000-0000-0000-000000000000 kalms@falldamagestudio.com KALMSHOMEDESKTO DEBUG Security - SEID with id 494 was not found
    2017-11-30 16:44:47,871 W-39 00000000-0000-0000-0000-000000000000 kalms@falldamagestudio.com KALMSHOMEDESKTO DEBUG Security - SEID with id 494 was not found

    I will make full logs available to you via email tomorrow. This does not block me from working.

    Actual questions for you:
    - Do you want more material for debugging? Like a snapshot of the SQLite database? (Please be specific about which files, in that case.)
    - Is there a sure-fire way to get rid of this problem? Other than, say, deleting the local replica and replicating it all over again?
  3. Hi, this is mostly FYI. I have recently encountered the following error message during a pull operation:

     The sync process was unable to replicate the following branch:
     Branch: <branch name>
     Operation: Pull
     Source repository: <repo>@<organization>@cloud
     Destination repository: <repo>@local
     Error description: Explicit transaction expected, but found no transaction

     This happened earlier this week on one machine running Plastic SCM, and just now on another machine using Plastic SCM. The pull operation was pulling from Plastic Cloud to a server on my own workstation/laptop.

     I got this error message for two branches. Choosing "Retry" would successfully pull those branches. Several other branches were also included in the pull operation; those pulled without any problems. Since then, I have performed more pull operations to pull newer changes from at least one of those branches, with no errors. (Suspicion: there was a small-scale, temporary hiccup with Plastic Cloud.)
  4. Error during sync to git: Cannot access a disposed object.

    Yet further digging made me realize that it was another 1GB limitation that I hit. Here are the steps that I had to go through.

    Pick your Docker version

    Docker comes in several forms:
    - Docker for Windows / Docker for Mac. These versions integrate tightly with the OS & desktop and use OS-specific virtualization mechanisms (Hyper-V on Windows / the macOS Hypervisor framework). The OS & virtualization integration requires them to run on Windows 10 Pro and upward, or OS X Yosemite 10.10.3 or above.
    - Docker Toolbox for Windows / Docker Toolbox for Mac. These versions do not integrate as much with the OS & desktop and rely on VirtualBox for VM management. Since they are not as tightly integrated, they work on a broader range of OS versions (Windows 7 or newer, OS X Mountain Lion 10.8 or newer).
    - Docker for Linux comes in only one form. It uses LXC for virtualization.

    I usually use Docker Toolbox for Windows on my workstations, since that gives me the same development environment at home as at work.

    Set OS virtualization support switches appropriately

    To use any of these virtualization technologies, the machine must have virtualization enabled in the BIOS. It is normally referred to as Intel Virtualization Technology (VT-x) or AMD-V. In addition, if you want to use Hyper-V virtualization on Windows 10 Pro, you must also install the Hyper-V role. VirtualBox, on the other hand, requires that Hyper-V is disabled - if you try to start a VirtualBox VM while Hyper-V is enabled, the machine will bluescreen.

    Change VM memory size settings

    With Hyper-V, the VM that runs containers will default to 1GB in size. See this thread, this thread and this documentation article for background information. I haven't tried this out myself, so I don't know for sure what will help.

    With VirtualBox, the VM that runs containers will also default to 1GB in size. If you launch the Oracle VM VirtualBox GUI, you will notice that Docker Toolbox has created a VM called "default". You can go into the settings of this VM and increase the memory size to something higher. (If the VM is already running, you need to stop it before you can make changes to it.) Once you have changed the VM memory size, you should be able to see the increased memory limit if you start a container and then run "docker stats <mycontainer>".
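    For Docker Toolbox, the same resize can also be done from the command line instead of via the GUI. A sketch, assuming the VM is named "default" (the Docker Toolbox default) and that docker-machine and VBoxManage are on the PATH:

```shell
# Stop the Docker Toolbox VM; its settings cannot be changed while it runs
docker-machine stop default

# Raise the VM's memory allocation (value is in MB; 4096 = 4GB)
VBoxManage modifyvm default --memory 4096

# Start the VM again; containers now have up to 4GB available
docker-machine start default
```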
  5. Error during sync to git: Cannot access a disposed object.

    Thanks. Turns out this is not a Plastic problem:
    - I'm running the local Plastic server and the local Git server in Docker containers, on Windows 10
    - The Hyper-V isolation that Docker uses under Windows 10 has a hard limit of max 1GB memory allocation per container (https://github.com/moby/moby/issues/31604)
    - Plastic happens to use more than 1GB of memory during the "Packaging" step in the GitSync process

    So -- the Plastic process is running out of memory, and the limit is set unreasonably low on my development machine; that's all. This will not happen in our production system, since that uses Linux as the host OS. I haven't yet found a way to raise the 1GB limit under Windows.
  6. Hi, when I synchronize a large repository (~5GB workspace) between a local Plastic server and a local Git server, I get the following error:

     root@e620c45ed378:~# cm sync FreedomPrototype git git@git-server:repos/FreedomPrototype.git
     - /main: Local changes
     - /main/Testbranch/Testbranch2: Local changes
     - /main/Testbranch: Local changes
     Receiving references... /
     - 0 changesets to pull
     - 1146 changesets to push
     Receiving references... OK
     There are changes to push.
     Exporting... OK 1145/1146
     Packaging...... - 0/24743
     Error: Cannot access a disposed object.
     Object name: 'handle'.

     I am using the latest Plastic server version:

     root@e620c45ed378:/# cm version

     The Plastic server runs on Ubuntu 14.04. A replica of the repository resides in Plastic Cloud, is successfully replicated to another Plastic server, and is incrementally (every minute) synced to another Git server. I think the problem here is that the Plastic->Git sync falls over when it needs to sync all changesets on the branch in one go. Debugging tips, please?
  7. Plastic Change Tracker feedback

    Hi, and just FYI, I'm running a Labs release. I have had the Plastic Change Tracker service active for some days, but will disable it for the time being. I notice that when editing text files with Visual Studio 2017, the change tracker service sees Visual Studio's file modifications as rename + delete + add operations (probably because that is what VS does under the hood). See the attached image for an example; I had edited BoardLogic.cs & BoardPresentation.cs in this situation.
  8. License file required for local server?

    Update: I requested a Personal Edition license, received it, and installed it on my server machine. It works nicely!
  9. Hi, as part of the "Plastic Cloud to Unity Cloud Build" bridge we're running (https://github.com/falldamagestudio/plastic-cloud-to-ucb/), we run a regular Plastic server + a Git server on a VM in Google's cloud. It has run correctly for about 50 days, but today any cm command prints the message "Limited by days evaluation license has expired.".

     Do I need to install a new license file to make this work for more than 1-2 months at a time? If so, how do I obtain that license? There is no plasticd.token.lic file on the machine. There is a plasticd.lic file -- I believe that the server created that itself during setup.

     Info when I try showing license details:

     root@f75b48db5f23:/opt/plasticscm5/server# cm li
     Limited by days evaluation license has expired.

     The Plastic server was installed on the box by performing:

     echo "deb http://www.plasticscm.com/plasticrepo/plasticscm-common/Ubuntu_14.04/ ./" > /etc/apt/sources.list.d/plastic.list
     echo "deb http://www.plasticscm.com/plasticrepo/plasticscm-latest/Ubuntu_14.04/ ./" >> /etc/apt/sources.list.d/plastic.list
     wget -q http://www.plasticscm.com/plasticrepo/plasticscm-common/Ubuntu_14.04/Release.key -O - | apt-key add -
     wget -q http://www.plasticscm.com/plasticrepo/plasticscm-latest/Ubuntu_14.04/Release.key -O - | apt-key add -
     DEBIAN_FRONTEND=noninteractive apt-get -q update && apt-get install -y -q plasticscm-complete && plasticsd stop
     clconfigureserver --language=en --port=8087 --workingmode=UPWorkingMode
     + more commands to create *.conf files

     I only want to run commands locally against the Plastic server. It will connect to Plastic Cloud, and to a local Git server (via GitSync). Our organization currently subscribes purely as Plastic Cloud users, so when I check https://www.plasticscm.com/download/dashboard I don't see any appropriate license files to download for a regular Plastic server.
  10. True, https://plasticscm.uservoice.com/forums/15467-general/suggestions/15615156-show-diffs-in-branch-explorer-and-changesets-views is a matching UserVoice suggestion. That works, sort of. Thanks! Not as convenient as a docked view though.
  11. Hi, when I navigate around in the Branch Explorer, I often find that I would like to use it to understand how the set of files has changed as I move along the changesets of a branch, or a pair of branches. More formalized: I would like to be able to browse history in a top-down manner (branches -> changesets -> files -> edits within files), move around quickly in that history, and see information about branches + changesets + files in the same view (without having to do more than one click per changeset, or open many windows). Is this already supported in the current UI?

      If not, one way could be to let part of the Branch Explorer (optionally) show the list of modified files, kind of like in the attached image mockup.

      Right now I use the following approaches:
      1) Select changesets, one by one, press Ctrl-D on each, then look through the list of changed files in each of the windows that open. This is convoluted.
      2) Right-click on the branch, choose "View > Explore changesets on this branch", look in the branch explorer and memorize which range of changesets I am interested in, scroll through the set of changesets to find that range, and inspect those changesets.

      Both approaches work - but they are more effort than I would like them to be.
  12. Problems with 'cm find' on Linux

    Thanks, that works for me. I can use the command now. This was not at all obvious from the documentation, though.

    Performing "cm find --help" on the command line on the Linux box shows:

    root@a863dafd279b:/# cm find --help
    Perform queries to obtain objects.
    Usage:
    cm find object_type [where str_conditions]
        [on repository 'rep_spec' | on repositories <'rep_spec'>+]
        [--format=str_format] [--dateformat=date_format]
        [--nototal] [--file=dump_file] [--xml] [--encoding=name]
    object_type    Object type to find. (See the 'CM FIND GUIDE' to see all the objects to find.)
    ... etc etc ...

    and if I compare this to another command:

    root@a863dafd279b:/# cm replicate --help
    Push or pull a branch to another repo.
    Usage:
    cm replicate src_br_spec dst_rep_spec [--push] [--preview]
        [TranslateOptions] [--user=usr_name [--password=pwd] | AuthOptions]
        (Direct server-to-server replication. Replicates a branch to a repository.)
    cm replicate src_br_spec --package=pack_file [AuthOptions]
        (Package based replication. Creates a replication package in the source server with the selected branch.)
    cm replicate dst_rep_spec --import=pack_file [TranslateOptions]
        (Package based replication. Imports the package in the destination server.)
    ...

    then I take it that "cm find" wants a single text-string argument, whereas "cm replicate" wants multiple arguments? I was unable to see a difference like that in the docs themselves, nor in the "cm find" example page (https://www.plasticscm.com/documentation/cmfind/plastic-scm-version-control-query-system-guide.shtml) ...
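    In other words, "cm find" expects the whole query as a single argument, and on Linux the shell strips the inner single quotes before cm ever sees them. Wrapping the entire query in double quotes avoids that; a sketch, with "MyRepo" as a placeholder repository name:

```shell
# The outer double quotes make the shell pass the whole query to cm as one
# argument, preserving the inner single quotes that the query parser expects
cm find "branch on repository 'MyRepo'"
```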
  13. Problems with 'cm find' on Linux

    Hi, I would like to view the structure of a repository via the command line, but am running into problems early on. First I would like to list all existing branches. The following command works on my Windows workstation, but not on a Linux machine (note: it runs inside a Docker container on an Azure VM):

    cm find branch on repository 'branchname'

    Windows example:

    C:\Plastic>cm version
    C:\Plastic>cm find branch on repository '<reponame>'
    <a number of lines with information on all branches in <reponame>>

    Linux example:

    root@a863dafd279b:/# cm version
    root@a863dafd279b:/# cm find branch on repository '<reponame>'
    Query error: expecting "STRING", found '<reponame>'

    I have tried various forms, but I keep getting query errors. Never do I manage to get it to say that the repo in question does not exist. (If I create a workspace from the repository on the Linux machine, then I can enter the workspace directory, perform 'cm find branch', and it will print info on the branches.) Perhaps this is something 6.0.x specific? Any ideas?
  14. cm replicate stays forever at CalculatingInitialChangeset

    Update: This is indeed an interaction between Docker and Google Compute Engine.
    - GCE has a network-wide MTU of 1460.
    - Docker ignores this and sets up a bunch of extra network interfaces with MTU 1500.
    - I'm not sure, but I expect that GCE machines have Large Segment Offload active.

    These three factors combined make it so that if an application (such as Plastic) attempts to send a lot of data via TCP, some fragments of the TCP communications will get dropped. Net result: Plastic local server -> Plastic Cloud communication becomes unreliable.

    I have made things work in our case by manually forcing the MTU for all network interfaces related to the Docker containers to 1460: https://github.com/falldamagestudio/plastic-cloud-to-ucb/commit/91faf02257ea4d41531541d9000277d36f268668

    Issue log: https://github.com/falldamagestudio/plastic-cloud-to-ucb/issues/15
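    An alternative to forcing the MTU on each interface by hand is to set it in the Docker daemon configuration. A sketch, assuming a Linux host where the daemon reads /etc/docker/daemon.json and is restarted via service(8); "mynet" is a placeholder network name. Note that the daemon-level "mtu" setting only covers the default bridge; user-defined networks take their MTU as an option at creation time:

```shell
# Make the Docker daemon create its default bridge with MTU 1460
cat > /etc/docker/daemon.json <<'EOF'
{
  "mtu": 1460
}
EOF

# Restart the daemon so the setting takes effect
service docker restart

# User-defined networks need the MTU passed explicitly when created
docker network create -o com.docker.network.driver.mtu=1460 mynet
```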
  15. Plastic Cloud backup/restore policies?

    Thanks, appreciated.