
Zenity

Members

  • Content count: 5
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Zenity

  • Rank: Newbie
  1. Zenity

    How do I clean up local repos?

    OK, thanks, but how do I find out the correct IDs? Is there a way to figure out the ID after the repository has already been deleted from the GUI?
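    Something like this is what I'm picturing, assuming the classic cm lrep command still exists in 6.0 and lists each repository together with its ID (I haven't verified the exact output format):

        # List the repositories on the local server along with their IDs,
        # so the matching Jet data folder can be identified before the
        # repo is deleted from the GUI.
        cm lrep localhost:8087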
  2. Zenity

    How do I clean up local repos?

    So I've been trying 6.0 with the Jet engine, and it nicely converted all of my repos. Some of them I had forgotten even existed, and they were pretty large, so I deleted the repo objects from the GUI. The Jet storage folder didn't become any smaller though, so I'm wondering: how do I actually free up the space? Also, since the repos were converted, I would like to free up the space used by the old sqlite repositories. What would be the cleanest way of doing this? I see a bunch of large .sqlite files in the server folder, so do I just delete them, or is there a more elegant way?
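    To make the question concrete, here is the kind of thing I would expect to work, assuming cm rmrep is still the supported way to remove a repository server-side (the command spec and the sqlite file handling are assumptions on my part):

        # Remove the repository through the server, not just the GUI object
        # (repo and server names are placeholders):
        cm rmrep myoldrepo@localhost:8087

        # After verifying the Jet conversion, the old per-repo .sqlite files
        # in the server folder look like leftovers that could be deleted by
        # hand -- but I would rather hear that there is an official way.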
  3. Zenity

    Let's talk about file transfers

    Hi, thanks for the detailed response. To be clear, I am only talking about the simplest possible setup: a single cloud repository with distributed developers, each using a local sqlite database (or Gluon). I believe this is the most interesting setup for small development teams working on large Unreal projects, as it bridges the gap between the simplicity of something like Dropbox and enterprise-level version control setups.

    The repository was freshly set up and only had a single commit in a single branch, which is why it was particularly surprising that it was so slow. When the progress indicator switched to 99%, the download rate notably dropped to about a third of what it was before, consistently, and sat there for about half an hour. I just assumed it was doing something special near the end that it did not expect to take long enough to deserve its own section in the progress wheel. Perhaps this was just a freak coincidence which caused the excessively slow download during the last one percent, but that seems a bit odd.

    Edit: The size of the repository is about 23 GB. Had it been any larger, I would have been screwed. It already took skipping sleep to get to the office as early as possible and staying until late at night, which is why my tone was perhaps a little grumpy last night.

    The checkin finished eventually; it was just frustrating that it wasn't accounted for in the progress wheel (screwing up my plans to go home and eat, since I expected it to be done at 100%). When I observed the initial checkin on the other person's computer (over TeamViewer), I already saw how long it takes, so I had a vague idea, and since that showed detailed progress, it made it doubly frustrating that I wasn't getting any indication now.

    The use case for which I had to cancel the download before was simply that I had to shut down the laptop to move between locations. With worldwide distributed teams this can easily happen, and not all places have good or reliable internet connections. My client before had similar issues: he tried to check in the project to the cloud server in Singapore (where he got a really slow connection, despite his massive broadband), but eventually his computer had to restart for updates. There are a number of reasons why it can be difficult for regular users to complete a really long operation. In the end it comes down to two specific use cases:

    1) Like you suggested, exactly right: the ability to pause an operation, especially with the ability to shut down the application and still resume later on. This would cover a lot of cases already.

    2) The ability to resume after a failure, like a power outage or network failure. Your response suggests that this already works in some situations but not in others (solid support for this with the cloud would be particularly important). Communicating this clearly in the UI and/or the basic user documentation is quite important as well, IMO, so that the regular user can make informed decisions in difficult circumstances (like whether it's even worth attempting an operation when the network is likely to be interrupted, or the computer has to be shut down before it can finish).

    Also, thanks for letting me know about changing the database location. It would be great if you could add this as an option to the GUI for the less technical users, even if they should be using Gluon to begin with (for some reason my client wasn't able to make the initial checkin with Gluon; I'm still not sure what the issue was exactly, but he was in contact with your support and in the end he was using the standard GUI). Speaking of which, do you have an official name for the "standard GUI"?

    One scenario I have found myself in pretty frequently lately is that a small project is run (read: bankrolled) by a very non-technical person who sets up the repository primarily for the benefit of hired developers, but would also like to keep their own copy updated and, at most, will occasionally check in some new content (like an asset pack bought from the Unreal marketplace). Gluon goes a long way in making those use cases more accessible, but there are still some complications, like having to configure the repository to update all files, which make me wonder if this couldn't be simplified even more. Some random ideas which come to mind would be a Dropbox-style read-only file sync from the main branch on the cloud server, or a web interface to download and upload files directly (like Perforce is doing with their Helix Cloud service).

    I understand that this is all pretty unusual as far as the usual audience for SCM systems goes, but this is exactly why I am so excited about Plastic. There is a big market gap which seems ripe for the taking. I'd love to be able to sum up Plastic to clients as "the Dropbox of version control", and it's really not far off! Meanwhile Perforce seems to be playing catch-up with adding DVCS capabilities and working on a cloud service, but with their enterprise focus and lack of agility, it still seems to me that Plastic is in the perfect position to disrupt that market.
  4. Zenity

    Let's talk about file transfers

    Case in point... today I tried to replicate a repo from a cloud server that isn't very close to me (we first tried my cloud server nearby, but it was way too slow for my client). I had already cancelled an earlier attempt because it took too long, so I went to the coworking space early and had the download running for the whole day. For most of the time it looked like I was barely going to make it. Then it suddenly slowed down dramatically at 99%, with no further feedback. Finally it switched to 100% and I thought I could go home, but now it's sitting there at "checking in" while my food at home is getting cold, and I have no idea if this will even finish in time or whether it is safe to cancel now.

    This is depressing; I haven't experienced anything this excessively slow before. The repo should be about 20-30 GB and it's a clean import. I have a good internet connection here, but the connection to the data center was abysmal. The laptop has a high-end desktop CPU and M.2 SSDs. If this is already taking a full day, it just doesn't seem workable without partial downloads. Please tell me that there is a better way of doing things (or one in the works).

    One thing I tried earlier was to replicate from one cloud to another, but it didn't let me. Did I just do something wrong, or is that not supported yet? Because right now that is the only idea I have to make this workable, although it's kind of too late for that by now. The most frustrating part of the whole experience was the lack of clear indication of what it was doing and how much longer it would take.
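    For reference, the cloud-to-cloud attempt was shaped like this, assuming the classic pull-style cm replicate syntax (repo and server names are placeholders, not my actual setup):

        # Pull the main branch from the distant cloud into a repo on a
        # closer server, so clients can then sync against the closer one:
        cm replicate br:/main@project@far-datacenter:8087 project@near-server:8087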
  5. Zenity

    Let's talk about file transfers

    Heya! I'm loving Plastic so far and I'm really excited about its potential to disrupt the market for small distributed teams working on big-ass projects, which is becoming really common with Unreal Engine 4. I consult for a few clients, and so far everybody has been happy to jump onto the Plastic bandwagon, since the alternatives are really not all that great. There is just one thing that worries me a lot, so I'd like to find out if my concerns are justified and, if so, whether this is something that could be improved upon soon.

    An important consideration when working with distributed teams online is efficient file transfers. I was excited to find that Plastic seems to do a lot to optimise this by sending files in bulk and showing useful progress indicators (sadly that's not exactly common among other VCSs...). But aside from that, there seem to be no features to make large file transfers more bearable for remote developers, unless I am just missing them. When it comes to file transfer usability, I think there are three major levels:

    1) A file transfer has to go through in one go; if it's interrupted, you have to start from scratch.

    2) The ability to resume aborted file transfers.

    3) A system that analyses local files and re-uses existing blobs of data whenever possible.

    Now, 3) would be an absolute killer feature for this kind of system, because a very common situation is that you have to clone a huge repository containing files you already have on disk. Seeing the entire thing being downloaded from scratch is just painful. I don't know if this would be technically possible, but since data is already bundled, why shouldn't it be? Dropbox, Steam and Backblaze would be examples of tools using such a system. But anyway, that's the wishful-thinking part.

    The concern is that 2) does not seem to be supported either. Whenever I have had to cancel a replication or a large checkin/checkout so far, it seems to have started from scratch on the next try. If this is true, it is a huge issue, because sometimes projects get so big that it actually becomes difficult for remote developers to transfer them in one go (and it's never pleasant to begin with). If there is a way to make this work, please let me know. If not, please let me know whether this situation could be improved soon. Other VCSs are notoriously bad at communicating how they handle resumed file transfers, but both Subversion and Git LFS seem to have at least basic capabilities to avoid downloading everything from scratch after a failure.

    Next, there's a little UI issue: when I start a long file transfer, I cannot use the Plastic GUI for any other work. Oddly, I can simply open a second GUI and use that one instead, but it would be fantastic if this could be handled a bit more asynchronously (even if it has to block actions on this particular repository, it should at least allow me to use others).

    And last (for now), a slightly related issue: since local repositories are created in the installation folder by default, this can lead to the system drive filling up quite unexpectedly. When you have a bunch of huge repositories and your system drive is a small SSD, that is a big problem. A solution, of course, is to install Plastic on a different drive, but this is hard to foresee before installing, and even then it doesn't appear to be the most elegant solution. If there is a way to change the location of the local (sqlite) repositories, I haven't found it, so it would be great if this could be made easily available from the GUI somehow.
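    To be concrete about that last point: is relocating the local databases just a matter of editing db.conf next to the server binaries? This is my guess from what I remember of the server docs (the key names and the path are assumptions on my part, so please correct me):

        <DbConfig>
          <ProviderName>sqlite</ProviderName>
          <!-- Guess: point the databases at a bigger drive -->
          <DatabasePath>D:\plasticdb</DatabasePath>
        </DbConfig>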