I read with great interest a post by Margaret-Anne Storey and Greg Wilson, entitled "How do practitioners perceive software engineering research?".
Here are a few thoughts of my own:
I think this article hits a lot of nails right on the head. Good job. I consider myself both a practitioner and a researcher: I worked in industry for several years, have always done research with industry, and I try to run the development of my research infrastructure using industrial best practices. Doing this, I see how incredibly hard it is, when you are time-constrained and pulled in different directions by so many pressures, to do anything other than muddle through! That is largely due to poor tools and a lack of money to invest in making them better.
Here are some additional thoughts:
1. A lot of SE research really is of no current practical use, or of use only in small niches of practice. This includes formal methods, which may be great for safety-critical systems, but not for much else. Much of what researchers are developing in process, quality, testing, etc. is also not of current use because even the basic techniques are not being well deployed: you can't expect more esoteric academic results to see the light of day when practitioners don't yet do the basics. The chasm between academia and industry widens as the research gets farther and farther from adoption.
2. The tools and techniques most valuable to practitioners have generally come out of industry or the open source community, sometimes with academics involved on the periphery. I am thinking of agile approaches, particularly test-driven development, Eclipse, new programming languages, etc.
3. Another category of tools and techniques just doesn't get off the ground because it requires a large investment by tool developers to make it work really well. Most tools are poor because not enough quality engineering is put into them, or they are too expensive for most engineers to use, let alone academics. The main example I am thinking of here is model-driven development. It has so much promise, and indeed proof that it works exists (e.g. in the automotive industry), but the tools are either expensive and proprietary or are poor. Academics are blocked from making big contributions by the large amount of nuts-and-bolts development effort required (which we don't have the money for). Industry is generally interested in developing end-user products, not the tools that would help build them (which would be 'overhead').
4. Research that might truly benefit practitioners often fails, at least at first, because peer reviewers don't like it. I have often been told my research is not formal enough, not formally evaluated, etc. So what if I know my technique makes it easier to develop good software (as shown in small-scale evaluations); the peer reviewers demand proof from industrial practice, or a lot of time-consuming formalization. Well, I am never going to get that industrial practice, or that postdoc for the formalization, am I, if peer reviewers reject the grants that would pay for it, or the papers that would lead to grants being accepted?
5. Academics do contribute in a huge way: to educating the next generation of software engineers. We often disseminate our results in that manner. However, too many academics, lacking industrial exposure, still propagate long-discredited concepts such as waterfall development, or promote formalism, process, etc. as the be-all and end-all.
6. I look at my colleagues in other types of engineering and note that they can often develop quality tools, either because their engineering problems are less 'wicked', or because they work in an area where there is lots of money for tool development.
7. Large infusions of government and industrial funding really do make a difference. I am thinking of what a difference CSER made to Canadian SE research between 1996 and 2006; that effect is still being felt.