Migrating repos to new server without full restore

Hi,

We’re migrating our repositories from an air-gapped, vendor-owned server to our brand-spanking-new, self-owned environment. What would be the best way to migrate the hosted repositories from the old system to the new? We have SSH access to both the old and new servers, but we cannot use the otherwise ideal approach of observing/importing the existing repositories, because the old server is air-gapped and can only be reached through a local VPN tunnel.

I would prefer not to do a git clone --mirror and push for each repository, because a) there are quite a lot of repositories, and b) we had issues with some of them when this was done in the original move to the current server.
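By that I mean doing something like this for each repository (the clone URIs here are just placeholders):

```
# Mirror-clone from the old Diffusion host, then push everything to the new one.
git clone --mirror ssh://git@old-host/source/example.git example.git
cd example.git
git push --mirror ssh://git@new-host/source/example.git
```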

The ideal would be to just tar the repositories and copy them across to the new server. However, we would prefer not to do a full restore of the old server, because there is a large amount of crud in it created by the vendor that we would like to be rid of.

What would be the best/easiest way to handle this? If I copied the old repository files across, is there some way to “rebuild” the repository lists from the git files?

However, we would prefer not to do a full restore of the old server, because there is a large amount of crud in it created by the vendor that we would like to be rid of.

Do you mean that there are a lot of things on disk (like extra installed packages, modified configuration, etc.) so you don’t want to do a full disk restore? Or do you mean that there are a lot of things in the Phabricator database (like old tasks, configuration, and user accounts) so you don’t want to restore Phabricator’s data (you’re going to start the new install from scratch)?

If it’s the former (you’re keeping the database data, want to tarball, don’t want to disk-image/disk-restore), tarballing will likely “just work”. “Tarball the whole repository directory” is the easiest/recommended way to recover from backups in the event of a loss of a machine. Unless you’ve configured repository clustering (which is somewhat unusual and advanced), you’ll end up in a state where the database says “repo X should be in directory /var/repo/X” for a bunch of repos, and those repos will actually be there after you expand the tarball, and everything will just work.
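Concretely, that can be as simple as something like this (just a sketch: it assumes the default repository.default-local-path of /var/repo, and that you stop the daemons while you take the archive):

```
# On the old host: stop the daemons so nothing writes while we archive,
# then tar up the whole repository store.
./bin/phd stop
tar -czf /tmp/repos.tar.gz -C /var/repo .

# Move repos.tar.gz across however your VPN setup allows (e.g. scp through
# a workstation in the middle), then on the new host:
tar -xzf /tmp/repos.tar.gz -C /var/repo
chown -R daemon-user:daemon-user /var/repo   # whichever user runs the daemons
./bin/phd start
```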

If it’s the latter (you’re wiping the Phabricator database completely) there’s no way to import Phabricator repositories from a directory. You could conceivably build this on top of diffusion.repository.edit API calls, and/or write an export/import pipeline by calling diffusion.repository.search on the old host to dump data and diffusion.repository.edit on the new host to create/import it.
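For example, a very rough sketch of that pipeline with arc call-conduit (the hosts and tokens are placeholders, and the exact transaction types diffusion.repository.edit accepts are listed in the Conduit console on your install):

```
# Dump metadata for the repositories on the old install. Results are paged,
# so follow the returned "after" cursor if you have more than 100 repositories.
echo '{"constraints": {}}' | arc call-conduit \
  --conduit-uri https://old.example.com/ \
  --conduit-token api-xxxxxxxx \
  diffusion.repository.search > old-repos.json

# For each entry in old-repos.json, create a matching repository on the new
# install. (Newer arcanist versions want "arc call-conduit -- <method>".)
echo '{"transactions": [
  {"type": "name", "value": "Example Repository"},
  {"type": "vcs", "value": "git"},
  {"type": "callsign", "value": "EXMPL"}
]}' | arc call-conduit \
  --conduit-uri https://new.example.com/ \
  --conduit-token api-yyyyyyyy \
  diffusion.repository.edit
```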

Whether you use the API or manually create things, it’s fine to just put repositories in the correct state on disk, without pushing/mirroring them. Phabricator will figure things out as long as the data in the repositories matches up properly (e.g., /var/repo/1/ has the right repository data for repository ID 1). There’s no extra magical state or anything that you need to worry about.
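For example, if the new install assigned ID 1 to a recreated repository (the exact storage path is shown on the repository’s Manage page), something along these lines should do it:

```
# Copy the bare repository data into the path the new install expects;
# the old-side path /var/repo/42/ here is just an example.
rsync -a old-host:/var/repo/42/ /var/repo/1/

# Then update/import it explicitly, or just let the daemons get to it.
./bin/repository update R1   # R1 is the new repository's monogram
```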

(I can also help you with the import process, but it’s outside the realm of free support and probably a bit involved. It might make sense to formally engage this as a Support issue if you have hundreds or thousands of repositories. If you have like 20, manually recreating them is likely the pathway that makes the most sense, even if it’s not much fun.)

This all assumes the repositories are not clustered. If they are, and you aren’t deleting the data, https://secure.phabricator.com/T13393 has a general outline of how we handle this in the Phacility cluster.

The situation is the latter (a shared Phabricator instance that now needs to be divorced from the vendor’s data, and especially their users).

I was looking at bin/repository and wondering if some of the functionality there might enable this, which was why I asked. We’re not up to a thousand repositories (just enough to make this take a good long while), but it sounds like the way to go with this is to do the migration manually.

Thanks for the quick and detailed response.