Good blog-post about the problem of large-scale software simulations in science.
I'm obviously a fan of science-through-simulation, so I don't think the problems raised kill it. Or rather, I don't think these issues are a sign that something has "gone wrong" with science. They are an inevitable part of the maturing of simulation as a tool.
But clearly there's an unarguable need for the code to be available to reviewers. And ideally, to a wider community (hey! OPTIMAES). In fact, this is what I called "the dialogue of models", where competing models are presented and criticised as a way of refining everyone's understanding of the issues.
But what else can be done? Because, frankly, even when the code is out there, the number of people with sufficient understanding and time to analyse it is going to be vanishingly small. And code is big, complex, and time-consuming. So the chance of it being "properly" peer-reviewed is low.
Higher-level languages make it easier to express more, more concisely. But the abstractions they rely on are commensurately hard to unpack.
Another option is to use common toolkits like Repast, where the task of debugging and verifying the underlying infrastructure is shared among many peers. Similarly, the data-sets need to be accepted and shared within the peer community. (There seems to be a list of climate data repositories here.)
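To make the toolkit point concrete: when the shared infrastructure (scheduling, random seeding, data collection) is vetted by a whole community, the code specific to any one paper shrinks to a model definition small enough to actually review. Here's a minimal stdlib-only sketch of that idea (this is a toy illustration, not Repast's actual API; the agent rule and all names are invented for the example):

```python
import random

class Agent:
    """A toy agent holding units of transferable wealth."""
    def __init__(self, wealth=1):
        self.wealth = wealth

def step(agents, rng):
    """One tick: each agent with wealth gives 1 unit to a random peer."""
    for a in agents:
        if a.wealth > 0:
            other = rng.choice(agents)
            a.wealth -= 1
            other.wealth += 1

def run(n_agents=100, n_steps=50, seed=42):
    """Seeded run, so a reviewer can reproduce the exact trajectory."""
    rng = random.Random(seed)
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(n_steps):
        step(agents, rng)
    return sorted(a.wealth for a in agents)

if __name__ == "__main__":
    final = run()
    # Sanity check a reviewer can run: total wealth is conserved.
    print("total wealth conserved:", sum(final) == 100)
```

The reviewable surface here is about twenty lines of model logic; in a shared-toolkit world, everything else (the scheduler, the RNG discipline, the output pipeline) would be the peer-verified commons rather than each author's private reimplementation.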
But what else could be done?