About psantosl

  1. Plastic Service cannot be started

    Hi, The license can't be loaded - that's the issue. Now, the thing is to find out how this server got there :-) It has nothing to do with registry keys or anything (we don't store any sensitive data in the registry); it can be related to the plasticd.lic file, or to the license token if you are using one. Did you have a previously working license? If so, try to set it up again temporarily. You can also try to skip using the "token" (for renewals) and just place a plasticd.lic file in the server binaries location (where the log files are). Hope it helps!
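If it helps, the plasticd.lic fallback is literally a file copy; a minimal sketch, where the paths are placeholders (a temp dir stands in for your real server directory):

```shell
# The fallback, in script form. SERVER_DIR should be wherever your server
# binaries and log files actually live; a temp dir stands in for it here.
SERVER_DIR="$(mktemp -d)"
printf 'license-contents-here\n' > /tmp/plasticd.lic   # your real plasticd.lic
cp /tmp/plasticd.lic "$SERVER_DIR/plasticd.lic"
# finally, restart the Plastic server service so it re-reads the license
# file instead of the token (service name varies by platform).
```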
  2. Can you tell us a little bit more about how you ended up here? Are you getting these messages when you hit "checkin"? I just want to confirm I understand the issue.
  3. Nodata replication

    Hi, I'm about to write a blog post about this to better explain it. Meanwhile:

    1) Does the feature require both server and client to be updated to the latest version? - YES. It requires a new API call to resolve objects in a block. Otherwise it would be terribly slow.

    2) In the release notes you mentioned that updating the workspace will cause a data download - which pieces will be downloaded? - The ones that you don't have in your local repo. I mean, suppose you replicate a repo with --nodata. You won't have any data. Now you switch to a branch: all files will be taken from the original repo.

    3) Is there support in the GUI for nodata and the hydrate command? - Nodata will be there soon in pull. We didn't add a GUI for hydrate yet, and it is not yet planned. Not sure how high prio it is; I mean, it is a quite advanced feature.

    4) Does hydrating the last changeset of the main branch allow further work in the local repo (creation of new branches, commits, workspace access)? - Yes, and if you don't hydrate, you will be able to work correctly too: create branches, new commits, everything.

    5) Does it mean that only data for the latest revisions of each object will be downloaded to the local repo? - If you run hydrate, yes. If you don't, no data will be downloaded to the local repo (just to the workspaces).

    6) What will be the result of pushing back a branch whose last changeset was hydrated to the local repo, where new changesets from task branches were added? - It will work fine. In case you push to a repo where some data can't be resolved, the push operation will fail, asking you to hydrate before. I mean, suppose I pull without data from main/scm1233@skull.codicefactory.com:9095. Then I make some changes on the branch (like it had 3 changesets and I add another 2). Then I try to push to plastic@cloud, where scm1233 doesn't exist => it will ask me to hydrate first. But if I just push to skull (where no data is missing), it will all go fine.

    7) What will happen if I try to merge my task branch to the main branch, which was replicated without data, and there are conflicts detected? - The update will download data from the right source to your workspace and the merge will go fine. The only issue is if it can't download the data from the remote because there is no internet connection.

    In short: you can safely use the --nodata repo even without ever hydrating it. Hydrating is good to go totally offline. As I see it: nodata is great to have super light local repos, but still work connected to the central. I mean, you checkin locally, branch locally, etc, all the super fast ops you need. But you can still grab some data from central, and of course (this already existed) annotate from central. If you diff a cset whose data you don't have, it will be retrieved from central too. Cool, isn't it? :-)
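A minimal command-line sketch of the flow above, using the repo specs from the example. The local destination repo name and the exact `cm pull`/`cm hydrate` argument forms are assumptions — check `cm pull --help` and `cm hydrate --help` for your version:

```shell
# Replicate branch metadata only, no file data (--nodata is the flag from the
# post; "scm1233@localhost:8087" is a hypothetical local destination repo).
cm pull main/scm1233@skull.codicefactory.com:9095 scm1233@localhost:8087 --nodata

# Work normally: switch, branch, checkin. Missing file data is fetched from
# the original repo on demand when a workspace operation needs it.

# Optionally hydrate a changeset to hold its data locally and go fully
# offline (changeset spec syntax assumed):
cm hydrate cs:3@scm1233@localhost:8087
```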
  4. Access to Issue Tracker's Source

    Hi, You would like to see the client code of the JIRA integration, correct? The code is not anything fancy, so no issue sharing it, but it depends on some internals that would make it harder to release. But, do you know you can create your own easily? https://www.plasticscm.com/documentation/extensions/plastic-scm-version-control-task-and-issue-tracking-guide.shtml#WritingPlasticSCMcustomextensions I'm not sure if everything you need to do will be included in the interface we created for the extensions. Please check and we can talk about it here or even have an online session to discuss. Thanks, pablo
  5. Web UI for latest 6.x version

    Yes, we have 2 web interfaces now:
    * webadmin: server administration UI. It runs embedded in the server, so it is super easy to run.
    * webUI: user focused. To browse repos, diff, create code reviews. It is older and tougher to install. The goal is to port it to something similar to what we have with webadmin soon. (But I don't foresee we'll have it in the next 4-6 months yet.)
    It is possible to use it in 6.0, yes.
  6. Update workspace with pending changes

    Hey, You set "merge options" but the message you are getting is about update. Do the following:
    * Go to pending changes.
    * Select your changed files and right click "apply local change" (this will mark the files as checked out).
    * Then go to the items view and click "update" => you should be allowed to move forward.
    Plastic does all this to prevent weird scenarios from happening while working on the same branch.
  7. Managing multiple projects

    Hi, You can use several ones, of course. It is just a matter of scripting "on repositories". Anyway, as the post you mention says, "on repositories" is only an issue when you have hundreds, though I don't know if that is your case.
  8. Managing multiple projects

    While the discussion of whether you should go for a single repo or many is an interesting one (for docs, one is fine), the answer to your question is easy:

    cm find "changesets where date >='2017/10/30' on repositories 'repo0@server:8084','repo1@server1:8084'"

    Or, alternatively, type this in an advanced query box in the GUI:

    find changesets where date >='2017/10/30' on repositories 'codice@codice@cloud','pnunit@codice@cloud'

    You can query together not only different repos, but also different servers.
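When the repo list gets long, the "on repositories" clause can be assembled in a small script. A sketch, where the repo specs are the hypothetical ones from the example above:

```shell
# Build the quoted, comma-separated repo list for an "on repositories" clause.
repos="repo0@server:8084 repo1@server1:8084"

clause=""
for r in $repos; do
  clause="${clause}'${r}',"
done
clause="${clause%,}"   # drop the trailing comma

query="changesets where date >='2017/10/30' on repositories ${clause}"
echo "$query"
# → changesets where date >='2017/10/30' on repositories 'repo0@server:8084','repo1@server1:8084'
```

The resulting string can then be passed straight to the CLI as cm find "$query".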
  9. Super interesting, because we were just discussing this yesterday with @mig and @calbzam. All you need to do is launch an after-checkin server trigger. Details here: https://www.plasticscm.com/documentation/triggers/plastic-scm-version-control-triggers-guide.shtml#Checkin
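As a rough sketch of such a trigger script (the PLASTIC_USER environment variable name and the install command below are assumptions — the linked triggers guide has what your server version actually passes):

```shell
#!/bin/sh
# after-checkin trigger sketch: append an audit line for every checkin.
# PLASTIC_USER is an assumed environment variable name - verify it in the
# triggers guide. TRIGGER_LOG defaults to a file in /tmp for this demo.
LOG="${TRIGGER_LOG:-/tmp/checkin-audit.log}"
printf '%s checkin by %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "${PLASTIC_USER:-unknown}" >> "$LOG"
```

Installing it would then look something like: cm trigger create after-checkin "audit" "/path/to/audit.sh" (install syntax assumed; see the linked guide for the authoritative form).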
  10. Custom merge tool

    Hey! Would love to see what you have developed :-) Can you share some screenshots or describe what it is about? :-) No, you don't need to worry about subtractive merges. For the external merge tool it is just like a regular merge... :-)
  11. checkout: Unexpected option --exclusive

    You are right, this flag shouldn't be in the help :-S We will remove it. It is from the old days when you could specify whether you wanted to lock a file or not (but back then all checkouts were handled on the server side, which was a scalability killer - the 3.0 old days).
  12. Hi, The best solution for this problem is:
    1. Split the project in two repositories. Repos in Plastic are light and easy to use.
    2. Managers and QA: a third repo with Xlinks to the other two. Problem solved.

    == Second solution ==
    * Each team works in branches.
    * An integrator with access to the entire repo merges to main.

    == Third solution ==
    * Teams work in branches.
    * Each team is responsible for merging its own branches => they just have to switch to main and merge from their branches.

    == Long explanation ==
    Now, what can we do to solve a problem like the one you posted originally without asking you to split into 2 repos? The main issue here is merge tracking.

    === Merge tracking in Plastic ===
    Merge tracking in Plastic is changeset based, not file based. This is similar to what Git does, but radically different from what Perforce does. What does it mean? When you merge a changeset, you merge all or nothing. You can't merge just a few files and leave others for a future merge.

    It wasn't like that for years. Major versions 1, 2 and 3 had per-file merge tracking. It was more flexible (to some extent) but it had a major drawback: performance and complexity. With per-file merge tracking, each time you merged two branches, Plastic had to retrieve and walk the version tree of each file, find the path between the two contributors involved in the merge, calculate the common ancestor (graph walking, weighing multiple possible paths, etc.) and finally launch the merge. It means that if you had 1000 files involved in a merge, Plastic had to calculate the common ancestor 1000 times. It didn't scale.

    Later, around 2011, in version 4, we changed merge tracking to be changeset oriented. Since then, each time you merge, the common ancestor is calculated just once by walking the changeset tree, independently of the number of files involved. No worries if you had 500k files involved in the merge: constant time, which was huge compared to 3.0, where you had to do ops for each of the 500k files, and that took forever (despite the many optimizations we made). We lost some flexibility but we gained performance and simplicity, and then Plastic learned to handle much more complex cases well, like divergent moves, cyclic moves and many others. And, to be honest, all the flexibility provided by the old 3.0 wasn't really widely appreciated, since it came with greater complexity on the user side too.

    === Ok, but what has all that to do with your path-based security? ===
    Again, it is all about the merge tracking. You are about to do a checkin, but someone else already checked in, so you need to solve a changeset-level conflict. Most likely the files are not even in conflict at all, but the tree structure needs to be merged. You can see /unity but you can't see /unreal. Ok, but in order to complete the new "tree" you need to add changes from both places, even if you don't see them. Merge, at this point, is done on the client side, and there are lots of complex cases with directory moves that it has to handle. So, at this point, if part of the tree that needs to be "merged" (added, deleted, changed inside, whatever) can't be loaded... the merge is aborted.

    == Enter Gluon ==
    We created Gluon precisely to handle cases like this: you work on a single branch, you touch just certain parts of the tree, and Gluon lets you load just those parts and checkin with ease. Gluon was created for artists in game development, and the 3 key principles we always considered during development were: I don't want to see a merge or a branch at all; I need to load just part of the repo and checkin correctly; I can't download the whole tree because it is huge. Gluon works single branch, no merges, but it can solve scenarios like the ones you mentioned.

    == The road ahead ==
    It seems there is no hard limit to actually being able to merge changesets when their underlying trees do not collide. We don't do it today because it wasn't super high prio so far, and because it means some complex changes in the merge process, which right now heavily relies on the client side to solve super complex directory move scenarios. But we could certainly add something to simply solve the conflicts, download the tree metadata (even without data, because you can't see the full tree due to permission restrictions), and submit it so that the new tree can be created without losing parts of its directory structure (right now the reason to avoid the merge is that part of the tree would be missing on the client side, so there is no way to load the new metadata underneath). In short: we will eventually improve the ability to merge "subtrees" provided the final one is coherent, but we are not there yet. For now we need access to the full origin-of-the-merge tree to do the merge.
  13. something like Posh-Git

    This is the closest... and it is open for contribution https://github.com/powercode/PSPlastic
  14. Hi, I was about to answer but I need some clarification first:
    * Pull => you mean update, right? Pull in Plastic involves multiple repos pushing/pulling from one another. Correct?
    * Push => you mean checkin, right? (Again, push is about distributed operation.)
    Thanks, pablo