I've recently been hired for a project that involves working with and around several third-party "enterprise" systems. Due to what I imagine would be the astronomical cost and effort required to build a sufficiently faithful replica of the production environment, the prospect of having a real development environment seems vanishingly slim.
This is of course not ideal. On the bright side, I imagine there must be people out there safely testing and deploying software into unreplicable environments like this, and I can probably follow in their footsteps.
How do those who effectively deal with these kinds of situations do it?
This happens all the time in the real world. I know a guy who writes apps that control gigantic agricultural greenhouses - ventilation, heating, moisture control, you name it. He doesn't have a "test greenhouse", but he has a simulator program provided by the company that builds the actual hardware systems. If the code works correctly with the simulator, it is presumed to work correctly with the real equipment. On rare occasions the simulator turns out to be wrong, but that's the greenhouse-hardware company's issue to deal with, because it isn't simulating correctly.
These are situations where API documentation, interface control documents, and emulators are paramount. At a company I worked for previously, this would happen frequently within a project during certain integration phases: one segment was ready, but the others were behind, had another feature being worked on, or for some other reason couldn't deploy the latest version of their segment to our test system. So yes, we did actually have a faithful replica of our production environment that we tested on; in practice, however, all segments were never ready on schedule. But the interfaces had been agreed upon and locked down before development started, and emulators had been created that could, for the most part, mimic the other segments' behavior.
As another answer stated, the emulator is what enables testing to take place before deployment. A good emulator, however, depends on well-defined interfaces and documentation.
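To make the idea concrete, here is a minimal sketch of such an emulator in Python. The `InventoryService` interface and its `get_stock` method are hypothetical stand-ins for whatever your interface control document actually specifies; the point is that the emulator honors the locked-down contract so your code can be tested against it before the real segment is available.

```python
from abc import ABC, abstractmethod

class InventoryService(ABC):
    """The contract agreed upon in the interface control document (hypothetical)."""
    @abstractmethod
    def get_stock(self, sku: str) -> int: ...

class InventoryEmulator(InventoryService):
    """Stands in for the real segment until it can be deployed to the test system."""
    def __init__(self, stock: dict):
        self._stock = stock

    def get_stock(self, sku: str) -> int:
        # Mimic the documented behavior: unknown SKUs report zero stock.
        return self._stock.get(sku, 0)

emu = InventoryEmulator({"WIDGET-1": 42})
print(emu.get_stock("WIDGET-1"))  # 42
print(emu.get_stock("MISSING"))   # 0
```

Because both the emulator and the real segment implement the same interface, code written against the emulator can be pointed at the real system without changes.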
I am in such situations all the time.
You surely do not need to interact with the entire application, just a few interfaces of some sort. Make sure you have confirmed, detailed documentation of those interfaces, then set up mocks of only those interfaces to verify that your added/changed code works the way you intended it to work.
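A small sketch of what that looks like with Python's standard `unittest.mock` module. The `charge` call and `process_order` function are hypothetical, assumed stand-ins for whatever the third-party interface's documentation describes; only the interface is mocked, never the whole system.

```python
from unittest.mock import Mock

def process_order(gateway, amount):
    """Your added/changed code, which touches only the documented interface."""
    result = gateway.charge(amount)
    return "ok" if result["status"] == "approved" else "failed"

# Mock only the interface, with the response shape taken from the documentation.
gateway = Mock()
gateway.charge.return_value = {"status": "approved"}

assert process_order(gateway, 100) == "ok"
gateway.charge.assert_called_once_with(100)
```

The mock is only as trustworthy as the documentation it was configured from, which is why confirming the interface details first matters so much.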
You can also do a hybrid. Replicate the parts that you can do rather easily, then "connect" to the real systems (if this is possible in your situation). I have done so with some success: in some cases my logic and the server software ran locally, but I still had a connection to the real ERP system to verify invoices etc. Not ideal, but things rarely are.
Given that you have only a production system to work with, note that you cannot count only the development time saved by not setting up a replica; you also have to take into account the business risk of running largely untested code against live business data. Your code WILL be less reliable than code tested against a replica. Can the systems be down for some time? Can they be restored in case of data corruption? How much does that cost?
A best practice in enterprises is to set up a replica (or maybe more than one) of production at the moment the production environment itself is set up. At that point, the additional cost won't be that huge.
Our system works with a number of large external systems. We combine the following approaches when testing them if we do not have a complete end-to-end setup:
Record-replay of real data. Record real data (requests/responses from the real external systems), parametrize it if necessary, and replay it in tests.
Simulator. Build or buy a simulator that acts as the external system.
DSL for test-data generation. For data-driven systems, write a high-level DSL for generating test data.