There is a certain irony in the fact that the phrase "Trustworthy Computing" now calls to mind the security and privacy flaws within Microsoft products. Microsoft, of course, has been the main proponent of the concept of "Trustworthy Computing" in the information market, outlining the framework for developing a secure computing environment.
How Trustworthy Computing will ultimately be achieved has yet to be seen. While some vendors are developing their own proprietary solutions, many build their strategies on the specifications provided by the industry standards body Trusted Computing Group (TCG). TCG's membership includes nearly all the major proprietary companies in the computer industry, including hardware developers such as Intel and AMD, computer manufacturers such as Dell and Hewlett-Packard, and software companies such as Microsoft and Symantec. More notable than TCG's members are its non-members: aside from Sun Microsystems, there are no major open source-based companies on TCG's membership list.
This paper critically analyzes current Trustworthy Computing initiatives and considers Trustworthy Computing in the open source space, showing that the move toward Trustworthy Computing is a necessary, though lofty if not unreachable, goal that may ultimately amount to nothing more than a public relations scheme for proprietary computing companies.
1.1 History of Trustworthy Computing
Society has gone through a number of large technology shifts that have shaped the culture: the agrarian revolution, the invention of metalworking, the industrial revolution, the advent of electricity, telephony and television—and, of course, the microprocessor that made personal computing a reality. Each of these fundamentally transformed the way billions of people live, work, communicate, and are entertained.
Personal computing has so far only really been deployed against white-collar work problems in the developed world. (Larger computer systems have also revolutionized manufacturing processes.) However, the steady improvement in technology and lowering of costs means that personal computing technology will ultimately become a building block of everybody's home and working lives, not just those of white-collar professionals.
Progress in computing in the last quarter century is akin to the first few decades of electric power. Electricity was first adopted in the 1880s by small, labor-intensive businesses that could leverage the technology's fractional nature to increase manufacturing productivity (that is, a single power supply was able to power a variety of electric motors throughout a plant). In its infancy, electricity in the home was a costly luxury, used by high-income households largely for powering electric lights. There was also a good deal of uncertainty about the safety of electricity in general and appliances in particular. Electricity was associated with lightning, a lethal natural force, and there were no guarantees that sub-standard appliances wouldn't kill their owners.
Between 1900 and 1920 all that changed. Residents of cities and the fast-growing suburbs had increasing access to a range of energy technologies, and competition from gas and oil pushed down electricity prices. A growing number of electric-powered, labor-saving devices, such as vacuum cleaners and refrigerators, meant that households were increasingly dependent on electricity. Marketing campaigns by electricity companies and the emergence of standards marks (for example, Underwriters' Laboratories (UL) in the United States) allayed consumer fears. The technology was not wholly safe or reliable, but at some point in the first few years of the 20th century, it became safe and reliable enough.
In the computing space, we're not yet at that stage; we're still in the equivalent of electricity's 19th century industrial era. Computing has yet to touch and improve every facet of our lives—but it will. A key step in getting computing to the point where people would be as happy to have a microprocessor in every device as they are relying on electricity will be achieving the same degree of relative trustworthiness. "Relative," because 100% trustworthiness will never be achieved by any technology—electric power supplies surge and fail, water and gas pipes rupture, telephone lines drop, aircraft crash, and so on.
2.1 WHY TRUST?
While many technologies that make use of computing have proven themselves extremely reliable and trustworthy—computers helped transport people to the moon and back, they control critical aircraft systems for millions of flights every year, and they move trillions of dollars around the globe daily—they generally haven't reached the point where people are willing to entrust them with their lives, implicitly or explicitly. Many people are reluctant to entrust today's computer systems with their personal information, such as financial and medical records, because they are increasingly concerned about the security and reliability of these systems, which they view as posing significant societal risk. If computing is to become truly ubiquitous—and fulfill the immense promise of technology—we will have to make the computing ecosystem sufficiently trustworthy that people don't worry about its fallibility or unreliability the way they do today.
Trust is a broad concept, and making something trustworthy requires a social infrastructure as well as solid engineering. All systems fail from time to time; the legal and commercial practices within which they're embedded can compensate for the fact that no technology will ever be perfect.
Hence this is not only a struggle to make software trustworthy; because computers have to some extent already lost people's trust, we will have to overcome a legacy of machines that fail, software that fails, and systems that fail. We will have to persuade people that the systems, the software, the services, the people, and the companies have all, collectively, achieved a new level of availability, dependability, and confidentiality. We will have to overcome the distrust that people now feel for computers.
The Trustworthy Computing Initiative is a label for a whole range of advances that have to be made for people to be as comfortable using devices powered by computers and software as they are today using a device that is powered by electricity. It may take us ten to fifteen years to get there, both as an industry and as a society.
This is a "sea change" not only in the way we write and deliver software, but also in the way our society views computing generally. There are immediate problems to be solved, and fundamental open research questions. There are actions that individuals and companies can and should take, but there are also problems that can only be solved collectively by consortia, research communities, nations, and the world as a whole.
2.2 THE NEED FOR TRUSTWORTHY COMPUTING
Current statistics show the obvious need for a true Trustworthy Computing architecture. The United States Computer Emergency Readiness Team (CERT) cites that in June 2004 alone there were 56,034,751 reported incidents of intruders in various information systems. These incidents include the use of malicious code (i.e., viruses and worms), Denial of Service (DoS) attacks, user/root compromises, etc.
The success of these attacks stemmed from the fact that these systems contained vulnerabilities – fundamental flaws in the system’s design, implementation, or configuration that attackers can recognize and exploit. The job of information security professionals includes mitigating the risks of these vulnerabilities by reducing or eliminating the probability of attack while limiting the impact of a successful attack by implementing various security measures. Each security measure implemented within a system detracts from the functionality of that system. Security managers have to balance security and functionality knowing that the only completely secure machine is one that has been completely removed from production. Trustworthy Computing promises to become information’s savior by eliminating the rift between functionality and security and allow a system to maximize both simultaneously.
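The trade-off described above can be sketched with a toy risk model. This is purely illustrative: the function, the measure names, and all numbers are hypothetical assumptions, not figures from the paper.

```python
# Illustrative only: a toy model of the security/functionality trade-off.
# All names, effectiveness values, and costs below are hypothetical.

def residual_risk(attack_probability: float, impact: float,
                  mitigation_effectiveness: float) -> float:
    """Risk remaining after a measure reduces the chance of a successful attack."""
    return attack_probability * (1 - mitigation_effectiveness) * impact

# Each stronger measure cuts risk but also costs some functionality.
measures = [
    # (name, effectiveness against attack 0..1, functionality cost 0..1)
    ("no controls",       0.0, 0.0),
    ("patching policy",   0.5, 0.1),
    ("strict firewall",   0.8, 0.3),
    ("removed from production", 1.0, 1.0),  # the only "completely secure" machine
]

for name, eff, cost in measures:
    risk = residual_risk(attack_probability=0.6, impact=100.0,
                         mitigation_effectiveness=eff)
    print(f"{name:24s} residual risk={risk:5.1f}  functionality lost={cost:.0%}")
```

The last row makes the paper's point concrete: only total removal from production drives residual risk to zero, and it does so at the cost of all functionality.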
The need for higher security in current systems is substantial; however, in order for computing to reach its full potential a new level of dependability, beyond merely compensating for known flaws, must be reached and successfully conveyed to the public. Trustworthy Computing is often associated with the potential of reaching a pervasive (a.k.a. ubiquitous) computing environment. Conceptually, pervasive computing refers to the eventuality that computers will enter into nearly every aspect of daily life with nearly everything around us (cars, tools, appliances, & other computing devices) equipped with some kind of embedded processing chip and networking capacity where it can send/receive data and react instantly to the environment.
Pervasive computing depends on the success of Trustworthy Computing. Computing devices have a legacy of failures from hardware, software, and malicious attacks that must be overcome in order for people to gain enough faith in the reliability of computing systems to allow them into every facet of their daily lives; this is a fundamental requirement for pervasive computing to come to fruition. Pervasive computing is considered the full realization of computer technology's potential, but before it can exist the consumer must be comfortable allowing computers to penetrate every aspect of daily existence.
Since Trustworthy Computing has both technical and social criteria, it is a broad concept that requires not only advances in engineering but also acceptance in society. A system deemed technically trustworthy perpetually functions in the expected, designed manner while maintaining the integrity, confidentiality, and availability of the data within. Achieving social trust relies upon building a consensus confidence among users that the system will perform as desired without losing or divulging any personal or otherwise sensitive data to unauthorized parties.
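One of the technical criteria named above, integrity, can be illustrated with a minimal sketch: comparing cryptographic hashes to detect whether stored data has been altered. The record contents and function names here are hypothetical examples, not drawn from the paper.

```python
# A minimal sketch of the "integrity" criterion: detect whether data has
# been altered by comparing SHA-256 digests. Illustrative example only.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest acting as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

record = b"patient-id=1234; blood-type=O+"   # hypothetical sensitive record
stored_digest = fingerprint(record)

# Later, before trusting the record, verify it has not been tampered with.
tampered = b"patient-id=1234; blood-type=AB-"
print(fingerprint(record) == stored_digest)    # True: unchanged data verifies
print(fingerprint(tampered) == stored_digest)  # False: alteration is detected
```

Confidentiality and availability would require further mechanisms (encryption, redundancy); this sketch covers only the integrity check.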
Microsoft recognized the need for Trustworthy Computing, knowing that it would take years for both technology and society to reach the point where it would become reality. Microsoft has considerable distance to cover in order for their products to be deemed trustworthy. A recent study done by MI2G states that, “recent global malware epidemics have primarily targeted the Windows computing environment and have not caused any significant economic damage to environments running Open Source including Linux, BSD and Mac OS X. When taking the economic damage from malware into account over the last twelve months, including the impact of MyDoom, NetSky, SoBig, Klez and Sasser, Windows has become the most breached computing environment in the world accounting for most of the productivity losses associated with malware - virus, worm and Trojan - proliferation”.
Microsoft’s control, however, encompasses only its own products. Since Trustworthy Computing requires all components of a system to achieve and maintain trustworthy status, Microsoft joined with other industry leaders to form the Trusted Computing Group, using their combined market power to push their vision of Trustworthy Computing.