In conjunction with the 27th International Conference on Software Engineering (ICSE 2005)
Every software application is built and deployed to accomplish some goal pursued by its interested parties. Thus, software engineers aim at designing, implementing, and maintaining valid applications that meet the needs of stakeholders. However, every application can also potentially be misused, that is, used to pursue goals that conflict with those intended by stakeholders. Therefore, software engineers should try to design applications that, while still valid, are also trustworthy and cannot be misused. Validity and trustworthiness are goals that often cannot both be achieved, either because they are too costly or because they stem from conflicting needs. Historically, the software engineering community has striven more for validity than for trustworthiness. Nowadays, however, the ubiquity of software in critical infrastructures has raised the value of trustworthiness, and new efforts should be dedicated to achieving it.
Poor-quality software has been recognized as the major source of system vulnerability. However, while secure applications are also valid and robust ones, security is a specific non-functional requirement that has to be explicitly and carefully taken into account during analysis, implementation, testing, and deployment. Moreover, some of the most successful techniques used by software engineers may conflict with security objectives. Abstraction, for example, is the invaluable device designers use to cope with complexity, but, since it is rarely applied as a pure mathematical generalization, it may lead one to neglect details that can be exploited to misuse an application. Late binding, while a fundamental tool in pursuing design for change, could be hijacked to adapt systems to malicious goals. Commercial off-the-shelf (COTS) components, while they may foster the profitability of the software industry, also introduce black-box subsystems that are difficult to manage when reasoning about the chain of trust of the whole system.
The security research community has proposed a number of techniques, such as cryptographic protocols and tamper-resistant hardware, that could be used to build trust in software components, tools, and processes. However, this knowledge cannot simply be used to augment software engineers' toolboxes, since applications merely "decorated" with security features could create only a false sense of trustworthiness. Instead, most of these approaches should be rethought in light of the experience of security researchers, in order to empower practitioners with novel techniques able to tackle the problem of building valid and trustworthy systems while understanding the associated costs and benefits. At the same time, several well-known software engineering disciplines such as verification, testing, program analysis, process support, configuration management, and requirements engineering could contribute to improving security solutions that sometimes lack a coherent methodological approach or, as in the case of the security standards proposed by the Common Criteria or BS7799, are challenging to integrate with mainstream software engineering practice.
This workshop will provide a venue to discuss techniques that enable the building and validation of secure applications. We are especially interested in (1) design and implementation approaches that make it easier to deal with security requirements, and (2) program analysis techniques that enhance the trustworthiness of applications.
Areas of interest include, but are not limited to:
Workshop papers must be limited to 7 pages in the ICSE two-column format.
Last modified: Wed Jan 26 17:26:33 CET 2005