
29th International Conference on Software Engineering, 20-26 May 2007

Keynote Speakers

Wednesday 23 May @ 9:00AM
Venue: Salon D
Steve Fisher, Salesforce.com
The Architecture of the Apex Platform, salesforce.com's Platform for Building On-Demand Applications

Thursday 24 May @ 9:00AM
Venue: Salon D
Deborah Johnson, University of Virginia
Computer Professional Ethics in Theory and in Practice

Friday 25 May @ 9:00AM
Venue: Salon D
Bev Littlewood, City University London
Limits To Dependability Assurance - A Controversy Revisited

Steve Fisher
Senior Vice President of AppExchange,
Salesforce.com

Title
The Architecture of the Apex Platform, salesforce.com's Platform for Building On-Demand Applications

Abstract
On-demand computing has transformed enterprise software, lowering risk and cost while increasing user adoption and customer success. To be successful, an application must be designed for on-demand delivery from the ground up, including core architectural elements such as multi-tenancy, availability, performance, security, metadata-driven customization, and integration via web services. As with any new paradigm, initial applications must design and implement all of these core attributes themselves, but ultimately platforms emerge that encapsulate the core computing services, allowing application developers to focus on innovation and value rather than on reinventing the wheel. With the Apex platform, salesforce.com has delivered the first on-demand platform, allowing developers to easily develop and deliver the next generation of on-demand applications. In this talk, Steve Fisher discusses the technical architecture of the Apex platform.

Biography
Steve Fisher is senior vice president of AppExchange at salesforce.com. In this role, Fisher leads the team responsible for building the business for AppExchange, salesforce.com's Web-based platform for business applications. Fisher is also chairman of salesforce.com's Technology Architecture Committee, which defines and ensures the integrity of the architecture for the salesforce.com service. With more than 16 years in the technology industry, Fisher has held positions with Apple Computer and AT&T Labs, where he served on the team responsible for architecting AT&T's VoIP and utility computing strategies. Fisher also founded NotifyMe Networks, an interactive voice-alerting platform application service provider, and served as the company's first CEO. He has been named an inventor on 14 patents. Fisher graduated from Stanford University with a BS in Mathematical and Computational Science and an MS in Computer Science.


Deborah G. Johnson
Olsson Professor of Applied Ethics and Chair, Department of Science, Technology and Society
University of Virginia
http://onlineethics.org/bios/johnson.html

Title
Computer Professional Ethics in Theory and in Practice

Abstract
The starting place for professional ethics is the idea that certain occupational groups have special expertise that leads to special responsibilities. Organizing the group into a profession, with a body that controls admission and promulgates a code of ethics, is a mechanism for ensuring that the special expertise of members is deployed in ways that benefit the public (consumers, users, non-experts) or, at least, do not harm it.
Professional ethics involves both issues and responsibilities that fall to the profession as a collective unit, and issues and responsibilities that are a matter of individual behavior. Codes of conduct straddle this distinction, for they are a collective expression of standards for individual behavior. Codes of conduct are not, however, the be-all and end-all of professional ethics. Professions create a culture of responsible conduct, a culture that embraces values such as safety, reliability, and elegance. They create this culture through a variety of activities, including codes of conduct, accreditation standards for programs, ethics committees, and hotlines.
In the case of computing, the argument for a strongly differentiated, organized profession that takes responsibility for creating a culture addressing the quality of computing products and services available to the public is compelling. Computing is a critical component of our society (and other information societies), and citizens and consumers do not, and cannot be expected to, understand the computing on which they depend for vital life functions. They have no choice but to trust computer professionals. The major question that computer scientists must ask, then, is whether the field is organized so as to be worthy of the trust of the public.
As an occupational group, computing has difficulty fitting itself into the paradigm of professions for several reasons: the field is diverse and loosely organized, and the relationship between academics and practitioners is fuzzy. Like many of the fields of engineering, computer science manages a tension between seeing itself as a profession and seeing itself as a group of individual agents working in the marketplace. The tension expresses itself in many forms. Computer scientists are evaluated by the criteria of computer science - standards of quality, elegance, creativity - as well as by the criteria of the marketplace - what can reach the marketplace quickly and do the job adequately for the time being. The tension cannot be resolved; it must be acknowledged and managed. While strategies used by other professions can be adopted, computing poses special challenges that call for more than standard approaches.

Biography
Deborah G. Johnson is the Anne Shirley Carter Olsson Professor of Applied Ethics and Chair of the Department of Science, Technology, and Society in the School of Engineering and Applied Science of the University of Virginia. Professor Johnson received the John Barwise Prize from the American Philosophical Association in 2004; the Sterling Olmsted Award from the Liberal Education Division of the American Society for Engineering Education in 2001; and the ACM SIGCAS Making a Difference Award in 2000.

Professor Johnson is the author/editor of four books: Computer Ethics (Prentice Hall, 1st edition 1984; 2nd edition 1994; 3rd edition 2001); Computers, Ethics, and Social Values (co-edited with Helen Nissenbaum, Prentice Hall, 1995); Ethical Issues in Engineering (Prentice Hall, 1991); and Ethical Issues in the Use of Computers (co-edited with John Snapper, Wadsworth Publishing Co., 1985). Two new books are now in press. She has published over 50 papers in a variety of journals and edited volumes. She co-edits the journal Ethics and Information Technology and co-edits a book series on Women, Gender, and Technology for the University of Illinois Press.

Active in professional organizations, Professor Johnson has served as President of the Society for Philosophy and Technology, President of the International Society for Ethics and Information Technology (INSEIT), Treasurer of the ACM Special Interest Group on Computers and Society, and Chair of the American Philosophical Association Committee on Computers and Philosophy. Currently she serves on the Executive Board of the Association for Practical and Professional Ethics.


Bev Littlewood
Professor of Software Engineering,
Centre for Software Reliability, City University London,
http://www.csr.city.ac.uk/staff/littlewood/

Title
Limits To Dependability Assurance - A Controversy Revisited

Abstract
More than twenty years ago, as computers were introduced into safety-critical roles in civil aircraft, there was much debate about what claims could be made for their dependability. Much of the debate focused, naturally enough, on what could be claimed for the reliability of software. A famous example was the apparent need to claim a probability of failure of less than 10^-9 per hour for some flight-critical avionics. Several authors (I was one) demonstrated that such claims were several orders of magnitude beyond what could be supported with scientific rigour. In this talk I shall revisit this debate, showing some advances that have been made in 'dependability cases', particularly involving formal notions of 'confidence' in dependability claims. However, I shall also show that the bottom line has not changed significantly: although some systems have been shown to have extremely high dependability after the fact (i.e. in extensive operational use), it still remains impossible to show, before using it, that a system will be extremely dependable in operation. The reason is an unforgiving law about the extensiveness of evidence needed to make very strong dependability claims. These limits to assurance should be of interest beyond the technical community: for example, they pose difficult questions for society in estimating the risks associated with the deployment of certain novel systems.
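A back-of-the-envelope sketch (not from the talk itself, and assuming a simple constant-failure-rate model) illustrates why such claims are hard to support: to reject a failure rate above 10^-9 per hour at 95% confidence from failure-free operational testing alone, one would need on the order of three billion test hours, i.e. hundreds of thousands of years.

```python
import math

def hours_required(target_rate, confidence=0.95):
    """Failure-free operating hours needed so that any constant failure
    rate at or above target_rate is rejected at the given confidence
    level.  Under an exponential model, t hours with zero failures are
    consistent with rate r with probability exp(-r * t); we solve
    exp(-target_rate * t) = 1 - confidence for t."""
    return -math.log(1.0 - confidence) / target_rate

t = hours_required(1e-9)          # the 10^-9 per hour avionics claim
print(f"{t:.2e} failure-free hours, about {t / 8760:.0f} years of testing")
```

The result, roughly 3 x 10^9 hours, is the "unforgiving law" in miniature: evidence requirements scale inversely with the claimed failure rate, so each extra order of magnitude of claimed dependability multiplies the required failure-free exposure tenfold.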

Biography
Bev Littlewood has degrees in mathematics and statistics, and a PhD in statistics and computer science. He founded the Centre for Software Reliability at City University, London, in 1983 and was its Director from then until 2003. He is currently Professor of Software Engineering at City University.

Bev has worked for many years on problems associated with the modelling and evaluation of the dependability of software-based systems, has published many papers in international journals and conference proceedings, and has edited several books. He is a member of IFIP Working Group 10.4 on Reliable Computing and Fault Tolerance, of the BCS Safety-Critical Systems Task Force, and of the UK Computing Research Committee; from 1990 to 2005 he was a member of the UK Nuclear Safety Advisory Committee. He is currently serving his second term as Associate Editor of the IEEE Transactions on Software Engineering, and is on the editorial boards of several other international journals. He is a Fellow of the Royal Statistical Society.