Wall of Fire
One of the major problems with building a distributed system is that it’s distributed. This means that the parts of the system need to talk to each other. Of course, these days, networks are viewed by most large network operators (e.g. universities) as hostile environments, where anything even remotely risky is split out, preferably into its own little subnet.
These two facts make wide-ranging distributed systems very difficult to deploy. First, the components of the distributed application have to be designed with a degree of security in mind from the outset, and many are not. Second, the corporate mind-set of network management is against distributing systems across the network, because these distributed systems forge connections between the firewalled subnets of the organisation.
If your distributed system is truly distributed (i.e. it has no central point of failure), then its components typically need a significant degree of cross-talk. In a full mesh, every pair of components may need to communicate, so the number of firewall holes that need to be opened up grows as the square of the number of systems. That drives up the management overhead, particularly when getting the system set up in the first place.
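To make the quadratic growth concrete, here is a minimal sketch (in Python, my own illustration rather than anything from the original argument) that counts the inbound allow-rules a full mesh would need, assuming each component sits in its own firewalled subnet and each direction of each pair needs its own rule on the receiving side.

# A minimal sketch of how firewall rules pile up when every component of a
# fully-meshed distributed system sits in its own firewalled subnet.
# Assumption: each subnet's firewall needs one inbound allow-rule per peer.

def mesh_firewall_holes(n_components: int) -> int:
    """Number of inbound allow-rules for a full mesh of n components."""
    # n * (n - 1) / 2 unordered pairs, each talking in both directions,
    # and each direction needs a rule on the receiving subnet's firewall.
    return n_components * (n_components - 1)

if __name__ == "__main__":
    for n in (5, 10, 20, 50):
        print(f"{n:>3} components -> {mesh_firewall_holes(n):>5} firewall holes")

Even at fifty components that is 2,450 holes to request, document and maintain, which is where the management overhead comes from.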
I’m not sure I have a solution for this, other than picking a very small set of standardised ports for the application to run on, and changing institutional policy so that this kind of deployment is easier to do.
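To show why a standardised port helps, here is a rough sketch (my own, with a made-up port number, made-up subnets and a pseudo rule syntax rather than any real firewall configuration) of the blanket rules such a policy would imply: one allow-rule per subnet for the agreed port, rather than one rule per pair of components.

# A rough illustration of the "small set of standardised ports" suggestion.
# Assumptions: a hypothetical agreed port, an organisation-wide address
# range, and example subnets; the rule text is pseudo-syntax, not a real
# firewall language.

APP_PORT = 7421                     # hypothetical standardised application port
CAMPUS_RANGE = "10.0.0.0/8"         # assumed organisation-wide address range
SUBNETS = ["10.1.0.0/16", "10.2.0.0/16", "10.3.0.0/16"]  # example subnets

def allow_rules(subnets: list[str]) -> list[str]:
    """One human-readable allow-rule per subnet for the standardised port."""
    return [
        f"allow tcp from {CAMPUS_RANGE} to {subnet} port {APP_PORT}"
        for subnet in subnets
    ]

if __name__ == "__main__":
    for rule in allow_rules(SUBNETS):
        print(rule)

Under this scheme the rule count grows linearly with the number of subnets rather than quadratically with the number of components, which is exactly what makes the deployment easier to negotiate with network management.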