ironbelly

Members
  • Content count: 46
  • Joined
  • Last visited
  • Days Won: 1

ironbelly last won the day on April 12

ironbelly had the most liked content!

Community Reputation: 1 Neutral

About ironbelly
  • Rank: Advanced Member

Recent Profile Visitors: 351 profile views
  1. The object is currently locked. Try Later.

     Ran into this issue today myself. Running cm listlocks came back with nothing, and I confirmed that no one was checking anything in at the time. I went onto the server into /opt/plasticscm/server and was not able to find a plastic.currentlocks file as specified here: So what I ended up doing was going into my local .plastic folder and moving the plastic.lck file out to test. Sure enough, once that was done I was able to do a cm update . just fine.
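     For reference, the workaround amounts to moving the client-side lock file out of the workspace's .plastic folder rather than deleting it, so it can be restored if needed. A minimal sketch, using a throwaway directory in place of a real workspace root (actual paths vary per machine):

     ```shell
     # Stand-in for a real workspace root; substitute your own workspace path.
     ws=$(mktemp -d)
     mkdir -p "$ws/.plastic"
     touch "$ws/.plastic/plastic.lck"

     # Move the lock file aside (keep a backup instead of deleting it).
     mv "$ws/.plastic/plastic.lck" "$ws/plastic.lck.bak"

     # With the lock out of the way, "cm update ." should proceed normally.
     ```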
  2. Noticed that currently only Debian 8.1 is supported. Is there a timeline for Debian 9 support, and are there any risks you are aware of with running Plastic on a Debian 9.0 server?
  3. Ok, the problem was that we updated our SSL certificate but didn't update it on the Plastic server. As soon as we recreated our PFX file, everything got better. For anyone coming to this thread in the future, here's a quick way to make a PFX file from your CRT/KEY files for Plastic to use: https://www.ssl.com/how-to/create-a-pfx-p12-certificate-file-using-openssl/
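     The linked guide boils down to a single openssl pkcs12 -export call. A self-contained sketch (a throwaway self-signed cert/key pair is generated first so the example runs on its own; substitute your real server.crt and server.key, and note that -passout is only used here to avoid the interactive password prompt):

     ```shell
     # Work in a scratch directory.
     cd "$(mktemp -d)"

     # Throwaway self-signed cert/key pair (stand-ins for your real CRT/KEY files).
     openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
       -subj "/CN=example.test" -keyout server.key -out server.crt

     # Bundle key + certificate into a PFX/P12 file for the Plastic server to use.
     openssl pkcs12 -export -inkey server.key -in server.crt \
       -passout pass:changeit -out server.pfx
     ```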
  4. Nope, this didn't fix the issue; it still persists. Digging into these massive error logs to see if I can find something.
  5. client.conf file and Gluon

    gotcha, thanks for the reply
  6. This error started showing up this morning and I have no idea why. We have a yearly license and I can't seem to get this thing to auto-renew, even though I go through the steps every month. Anyway, this month I can't get the error to go away with the usual methods. I have tried regenerating the token and running the configure server command, and I have tried re-downloading the .lic file and replacing it, but I am still seeing this error. I am upgrading the server to the latest version now and will update this thread if that fixes the issue.
  7. Uploading spins forever and doesn't upload

    It is now at the point where commits will hang if any file in the commit is larger than 8MB ("Block uploading 0 files" forever). Things I tried today:
      • Deleting the workspace and all local files
      • Creating a new workspace
      • Re-downloading the repo
      • Uninstalling Plastic
      • Installing the latest version
  8. Uploading spins forever and doesn't upload

    Another update: this issue happens in the main Plastic client, not just with Gluon. I converted the workspace over and the same thing happens; it just hangs there too if too many files (more than 7, it appears) are selected for a check-in. I enabled logging and this is what I see:
  9. Uploading spins forever and doesn't upload

    Following up again: there doesn't seem to be any rhyme or reason behind this, and the problem doesn't pertain to specific files. The following screenshot is an example of a group of files I attempted to check in that would result in Gluon hanging indefinitely, as described above: Now, if I submitted the following, removing 08 from the list, it worked fine: So I figured 08 was the problem, but when I selected it along with 4-5 other files and tried checking them in, voila, it worked perfectly. So 08 wasn't the problem; it just wouldn't work when grouped with those other files.
  10. Uploading spins forever and doesn't upload

    It only happens if certain files are selected, mind you. For example, I was able to check in the files above this problem file, but as soon as I try to check it in, or include it with the others, it hangs permanently. Selecting no more than 3 files at a time, I was able to commit all of the files above this one, but once I got to it, it would say 'uploading 0 files' and hang there forever. This file doesn't appear to be different from the others: it is not read-only, and it is not open in another program. I can't tell what is different about it.
  11. I am trying to commit a 16-file, 10.6MB commit and it just sits there, spinning forever. I don't know if it's related, but the server error logs show this:

      2017-10-26 19:25:56,048 ERROR PlasticProto.ConnectionFromClient - conn 1435. Error sending a successful response for method [GetObjectsData]. Unable to write data to the transport connection: interrupted.
      2017-10-26 19:25:56,050 ERROR PlasticProto.ConnectionFromClient - conn 1436. Error sending a successful response for method [GetObjectsData]. Unable to write data to the transport connection: interrupted.
      2017-10-26 19:25:56,051 ERROR PlasticProto.ConnectionFromClient - conn 1437. Error sending a successful response for method [GetObjectsData]. Unable to write data to the transport connection: interrupted.
      2017-10-26 19:26:56,052 ERROR PlasticProto.ConnectionFromClient - conn 1439. Error sending a successful response for method [GetObjectsData]. Unable to write data to the transport connection: interrupted.
      2017-10-26 19:26:56,053 ERROR PlasticProto.ConnectionFromClient - conn 1440. Error sending a successful response for method [GetObjectsData]. Unable to write data to the transport connection: interrupted.
      2017-10-26 19:26:56,054 ERROR PlasticProto.ConnectionFromClient - conn 1438. Error sending a successful response for method [GetObjectsData]. Unable to write data to the transport connection: interrupted.
      2017-10-26 19:26:56,057 ERROR PlasticProto.ConnectionFromClient - conn 1441. Error sending a successful response for method [GetObjectsData]. Unable to write data to the transport connection: interrupted.
      2017-10-26 19:26:56,061 ERROR PlasticProto.ConnectionFromClient - conn 1442. Error sending a successful response for method [GetObjectsData]. Cannot access a closed Stream.
  12. client.conf file and Gluon

    One of the most important features in Plastic for us is multi-threaded downloads, which we set in the client.conf file with <DownloadPoolSize>. Does this also affect Gluon? If we set it to 10, will Gluon also try to use 10 TCP threads?
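    For context, the setting sits in the client.conf XML file in the client's configuration folder. A fragment like the one below is what we mean; the <DownloadPoolSize> element name is from our working config, but the surrounding root element shown here is illustrative only, and the exact file location varies by platform:

    ```xml
    <!-- client.conf fragment: number of parallel download threads (illustrative wrapper element) -->
    <ClientConfig>
      <DownloadPoolSize>10</DownloadPoolSize>
    </ClientConfig>
    ```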
  13. Thanks for the thorough answer; I will be coming back and re-reading this, and sending our admins here for some background info. It is very interesting to know. Now let me complicate the situation even more. In the situation above:

      ROOT
        Unity
          1  2  3
            11  222  33
        Unreal 4
          1  2  3
            11  222  33

      You'll notice we have the 3rd layer of folders 1, 2, 3 in the example. As we've grown, we end up with separate teams working on each of those, so each represents a different, slightly independently developed, but still related Unity project that falls under the larger umbrella. There are cases where assets will be shared between 1 and 3 but maybe not between 2 and 3, or sometimes between 4 and 12, etc., so it is important that they are kept together.

      Reading through your reply, I thought that I might need to change the way I look at version control and be far more liberal with creating repos. They are light and easy, as you mentioned, so why not have a separate repo for each project? But then I went through the list of projects in my head and wanted to come back here to voice my concern. In our current setup we have a Unity and an Unreal version of all of our internal projects. We have 50-60 internal projects being worked on by 5-10 teams, meaning we would need 120 repos, all of which would get xlinked up. Overall, the setup and maintenance of 120 repos (xlinking them for our team of 4-5 managers, adding and removing people, etc.) seems overwhelming compared to the way we are doing things now.

      The next option is for each project to be its own branch, but again, with 120 branches, staying on top of and merging changes becomes a full-time job, and the chances of updates falling through the cracks go up. At the end of the day, from what I know about Gluon, that seems to be our ticket, and not just for artists. We have programmers who work on their own in their own corner of a project who would appreciate not going through all of the hoops described above.

      I will task our team with switching as many people over to Gluon as possible this week, and if I have any more questions I will come back. Thanks again for the reply.
  14. We have a repo that is divided into 2 folder trees, for Unity and Unreal, something like this:

      ROOT
        Unity
          1  2  3
            11  222  33
        Unreal 4
          1  2  3
            11  222  33

      In this repo we have 2 groups of users, Unity and Unreal. The Unity group only has read/write perms on the Unity side of the tree, while the Unreal group only has them on the Unreal side. This is so our Unreal devs don't have to download 30GB of Unity content and vice versa. We have a single repo because our QA people and our managers need access to both sides of the tree to do their jobs, and it is much more convenient for them to go to one repo, do one update, and be able to push to one place. Here's the issue at hand:
      • A dev in the Unity group pushes a bunch of updates.
      • A dev in the Unreal group does a bunch of pulls; because he has no read access to the Unity side, it updates him to the latest changeset but doesn't pull anything down (this is the desired behavior).
      • The Unreal dev does a bunch of work and then goes to commit. While he was working, a dev in the Unity group pushed 2 more changes.
      • The Unreal dev goes to push and is told he is out of date and has to do a merge before he can commit.
      • The Unreal dev doesn't have read access to the Unity side, so he can't merge or do anything.
      • The only option we found to let the Unreal dev do anything was to cherry-pick those 2 changesets for what he does have access to and then merge.
      • This essentially results in those 2 changes being erased, as he can only cherry-pick what he can read.

      My question is: how do we overcome this challenge, in this situation or any situation where one person might not have read access to something that someone else commits, but still needs to perform a merge to get their stuff in?
  15. Yes, this is correct. I was able to continue working on existing repos that had tabs open, no problem, but the repo list stayed blank, as it got stuck on the downed server when trying to update. In terms of how often we list repos, it might be once a week or so; however, we onboard new staff regularly, so for them it was impossible to get set up fresh on a new project, because they couldn't create a workspace as they couldn't access any repos.