On 5/31/13 10:14 PM, Johnny Stenback wrote:
> On 5/31/2013 12:32 AM, Mike Hommey wrote:
>> [...]
>>> Option 1 is where I personally think it's worth investing effort. It
>>> means we'd need to set up an atomic bidirectional bridge between hg and
>>> git (which I'm told is doable, and there are even commercial solutions
>>> out there that may solve this for us). Assuming we solve the
>>> bridge problem one way or another, it would give us all the benefits
>>> listed above, plus developer tool choice, and we could roll this out
>>> incrementally w/o the need to change all of our infrastructure at once.
>>> I.e. our rollout could look something like this:
>>> 1. create a read-only, official mozilla-central git mirror
>>> 2. add support for pushing to try with git and see the results in tbpl
>>> 3. update tbpl to show git revisions in addition to hg revisions
>>> 4. move to project branches, then inbound, then m-c, release branches, etc.
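For step 1, a read-only mirror driven by hg-git could look roughly like
this (just a sketch, untested, and the git URL is made up):

  # in the mirror account's ~/.hgrc
  [extensions]
  hggit =

  # map the default branch tip to a git branch, then export;
  # rerun from a hook or a cron job to keep the mirror current
  $ cd mozilla-central
  $ hg bookmark -f -r default master
  $ hg push git+ssh://git.mozilla.org/mozilla-central.git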
>> Another way to look at this would be to make the git repository the
>> real central source, and keep the mercurial branches as clones of it,
>> maintained with hg-git (which supports pushing to git, too).
>> This would likely make it easier to support pushing to both, although
>> we'd need to ensure nobody pushes octopus merges to the git repo.
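(Mercurial commits have at most two parents, so octopus merges wouldn't
round-trip through hg-git.) Rejecting them server side should be cheap,
something along these lines in a pre-receive hook (a sketch, untested):

  #!/bin/sh
  # refuse any push that introduces commits with more than two parents
  while read oldrev newrev refname; do
    # note: newly created refs ($oldrev all zeros) would need extra handling
    if [ -n "$(git rev-list --min-parents=3 $oldrev..$newrev)" ]; then
      echo "rejected: octopus merge in $refname (hg can't represent it)" >&2
      exit 1
    fi
  done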
> Yup, could be, and IMO the main point is that we'd have a lot of
> flexibility here.
>>> Option 2 is where this discussion started (in the Tuesday meeting a few
>>> weeks ago,
>>> https://wiki.mozilla.org/Platform/2013-05-07#Should_we_switch_from_hg_to_git.3F).
>>> Since then I've had a number of conversations and have been convinced
>>> that a wholesale change is the less attractive option. The cost of a
>>> wholesale change will be *huge* on the infrastructure end, to a point
>>> where we need to question whether the benefits are worth the cost. I
>>> have also spoken with other large engineering orgs about git performance
>>> limitations, one of which is doing the opposite switch, going from git
>>> to hg.
>> I bet this is Facebook. Their use case includes millions of changesets
>> with millions of files (IIRC, according to posts I've seen on the git
>> list).
> I've promised not to mention names here, so I won't confirm or deny...
> but the folks I've been talking to mostly have a repo that's a good bit
> less than a single order of magnitude larger than m-c, so a couple
> hundred thousand files, not millions. And given the file count trend in
> m-c (see attached image for an approximation), that doesn't make me feel
> too good about a wholesale switch, given the work involved in doing so.
I wouldn't be surprised if tree depth had an impact on perf, given how
git stores directories: as tree objects whose entries reference child
trees and blobs.
I would think that 1000 files in the top-level dir perform better than
1000 files spread across dirs 20 levels deep.
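My mental model, for what it's worth: resolving a path walks one tree
object per directory level, so reading foo/bar/baz/file.h from a commit
means commit -> root tree -> tree "foo" -> tree "bar" -> tree "baz" ->
blob, each a separate object lookup. You can watch it happen with git
plumbing (the path is made up):

  $ git cat-file -p 'HEAD^{tree}'        # root tree; lists "foo" with its tree sha
  $ git cat-file -p <sha of foo>         # tree for foo/; lists "bar", and so on
  $ git rev-parse HEAD:foo/bar/baz/file.h   # the blob at the end of the walk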
Axel