
Matt

Members
  • Content Count: 5
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Matt

  • Rank: Newbie


  1. Hi Mikael, I appreciate the information. Cloud would be the way I'd lean in any case, since, like you say, maintaining backups is already a pain in the ass, and not free. However, we will not switch (due to cost, explained below).

     For anyone else in our situation, the experiment worked. I tried out Plastic with this path (a command-line sketch of these steps, and of the empty-directory cleanup mentioned below, follows the post list):

     1) hg clone repo/path --all-largefiles (get a repo copy that has every single file)
     2) hg lfconvert oldrepo newrepo (get a vanilla Mercurial repo with all the big files)
     3) Use the fast-export script linked above to generate a Git repo
     4) Use git fast-export to generate a binary blob (for us it was 72 GB)
     5) Use Plastic's fast import to import it

     It generally worked, with three caveats. Every tag got mapped to a branch (unclear whether that happened in Git or only in Plastic; probably it already happened in Git, since Git tags don't work like Mercurial tags). One filename with a Unicode character broke Plastic: it would fail to sync that one file. Not a big deal, since this was only a test; I renamed the file in hg, so in the future it should work unless we need to go back in time with that file. Lastly, there were empty directories for every directory we had ever deleted, which was funny, but it only took one script to delete the empty directories and one commit to remove them. After that, a diff of the Plastic workspace against the Mercurial repo showed no difference: awesome.

     My brief time with the Plastic GUI was wonderful; it was fast. That is why I want to leave Mercurial: it is functional, but slow (also, code quality in TortoiseHg is declining, and Mercurial will probably become unusable one day, which is terrible since it is so good functionality-wise).

     However, ultimately, the cost doesn't work for us. We've been working on the game for 2-3 years, we make no money until we ship (hopefully!), and we have no investment, so I realize we're not the target audience with the current pricing scheme. I'd switch us to p4 since it's free for 5 people, but... it's p4... so no. Spending $62 a month (6 users + 16-100 GB) on repo hosting would be a drop in the bucket if we had income! So when we ship, and hopefully it makes enough to at least fund our next project, we'll come knocking on the door ❤️

     Thanks to everyone. I'll probably come back to this thread in a year's time when I need to migrate, hehe. Matt
  2. This is probably because my head works a different way. I'm working on a Unity game, for example, and will test things on Xbox and PC at the same time. To deploy to Xbox I need to reimport the whole project, etc. Disk space is cheaper than time, so I just have multiple versions of the project running simultaneously, even though history is shared (and I may not even branch commits if they fix overall project things I happen to find). There might be better ways of doing this in the long run, but for now it's good to know it's at least supported :)

     OK, last question for real! Is the server as stable on Linux as on Windows? I don't have much (read: any) experience running production C# services via Mono, and I'm concerned about their stability. I'd rather not spin up a Windows box to act as a server if possible. Thanks again, Matt
  3. OK, a few more questions. The repo/workspace model, in Perforce speak, would be closer to a depot/workspace model? So I could have one repo for the company, or one per project, and each user could have their own set of workspaces per repo, correct? Is there a limit on workspaces per user?

     Does each workspace need a full copy of the repo on disk? There isn't any symlinking/copy-on-write that could save space if I want 4-5 workspaces of the same repo on the same disk with only small differences between them? That would be nice.

     What are the RAM requirements of the Team Server? Thanks!
  4. Thank you to both, this helps with all the questions!
  5. Hi, I'm evaluating a switch to Plastic SCM Cloud from our current Mercurial+Largefiles setup. My questions:

     First, is the only way to go from hg+largefiles to convert to a normal, non-largefiles repo, then use hg fast-export (here: https://github.com/frej/fast-export) to get to Git, then git fast-export to fast-import into Plastic SCM? Is this reasonable to do on Plastic SCM Cloud?

     Second, where are the Plastic SCM Cloud servers hosted? We're a distributed team in Europe and fast access is important (obviously it can't be perfect for everyone, but the US west coast, for example, wouldn't be good).

     Third, is there a limit to cloud repository size? Does the $7.45/mo stay constant even upwards of 1 TB?

     Fourth (this isn't obvious from the users page), does the Team license include the Jet backend? What happens if we go from 15 to 16 users? Do you automatically need to switch to the Enterprise license?

     Thanks! Matt
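
A minimal shell sketch of the migration path described in post 1 above, for reference. The repository URL, local directory names, the path to the fast-export checkout, and the Plastic repo spec are placeholders, and the exact cm fast-import invocation may differ by version, so treat this as an outline rather than a recipe:

    # 1) Clone the Mercurial repo so every largefile revision is present locally
    #    (requires the largefiles extension to be enabled).
    hg clone --all-largefiles https://hg.example.com/game game-largefiles

    # 2) Convert it back to a vanilla Mercurial repo with the big files inline.
    hg lfconvert --to-normal game-largefiles game-plain

    # 3) Convert the plain hg repo to Git with frej/fast-export.
    git init game-git
    cd game-git
    /path/to/fast-export/hg-fast-export.sh -r ../game-plain

    # 4) Dump the Git repo to a fast-import stream (about 72 GB in the post above).
    git fast-export --all > ../game.fi
    cd ..

    # 5) Feed the stream to Plastic SCM (repo spec is a placeholder).
    cm fast-import game@myorg@cloud game.fi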
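
Post 1 also mentions cleaning up the empty directories left over from the import with one script and one commit. A sketch of that idea, assuming a Unix shell, GNU find, and a workspace whose metadata lives in a .plastic directory; the cm checkin flags are written from memory, so check cm checkin --help (or just commit from the GUI) before relying on them:

    # Delete empty directories inside the workspace, leaving Plastic's metadata alone.
    find . -type d -empty -not -path './.plastic*' -delete

    # Record the deletions as a single cleanup changeset.
    cm checkin --all -c="Remove empty directories left over from the import"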