


Linux.SYS-CON.com "Blast from the Past" No.1: Paul Murphy on Apache & Plan 9


How Apache & Plan 9 will defeat Microsoft's Passport:
Dominance of Apache means Microsoft won't have the votes to enforce its approach

(Linux.SYS-CON.com, 22 September 2003) - Linux gets a lot of press these days, but much of it appears condescending and is more about the phenomenon of its emergence and growth than it is about the value and use of the technology. That may be about to change, and for the better.

As a group, the so-called "mainstream press" often appears to favor Microsoft and show an appalling lack of technical depth in its enthusiastic repetition of the latest Microsoft press release. There's been a lot of speculation on why this is and whether it even happens. So far, no definitive research provides answers one way or the other.

I think there are two explanations. First, the press is largely the victim of a Microsoft marketing strategy that plays off of the social rules people learn in high school: Money and social skills define the in-crowd, and only nerds kvetch about the importance of better technology. Second, editors are part of a feedback loop, reflecting the views of their readers in the decisions they make on content and wording. Their perception of their market as a shallow Microsoft PC market therefore leads them to favor cheerleading over analysis because doing so sells more advertising, more subscriptions, and more advertised product.

What happens when journalists find themselves reporting that the legal and personal risks associated with Microsoft's Passport can be easily avoided by adopting better technology from the open-source world? The normal legal standard for judging the adequacy of professional services - such as those involved in setting up an e-commerce site - is consistency with the "best" or industry-wide practices. What does the mainstream press do when that standard is largely set by the 66 percent of Web administrators who use Apache and open source? I don't know, but I think we're about to find out, courtesy of Plan 9's pending victory over the X-files in the matter of single sign-ons and network authentication.

Single sign-on then & now

In the early days of Unix, resource access and authentication were not problems. Users signed on to the VAX and promptly maxed out the resources for which they were authorized. That worked well... until organizations got a second Unix machine in the door and people who needed two logins complained about the complexity of remembering two passwords.

Microsoft's consent decree
From: E-Legal: Microsoft Enters Into FTC Privacy Consent Decree, by Eric Sinrod (law.com)

"On Aug. 8, Microsoft entered a 20-year consent order with the Federal Trade Commission with respect to alleged failures of its Passport authentication service to protect the privacy and security of personal information. Hopefully the consent order will result in ensuring the security of personal information.
"Second, Microsoft is required to establish and maintain a comprehensive information security program in writing that is reasonably designed to protect the security of personally identifiable information."

None of the solutions to this problem were as good as they should have been. Kludges like NIS+ and FNS could be made to work for as long as the sysadmins wore their lucky underwear, but these fixes were never exactly stirring advertisements for the simplicity and user-focus of Unix design ideas. The .rhosts mechanism and the related facilities used with X11 were, on the other hand, both easy to use and wonderfully effective from a user perspective, but largely unacceptable to security-minded system managers.

Today, the single sign-on problem has escaped the back rooms to become a front-burner competitive issue. Microsoft's Passport service attempts to deliver a single sign-on solution for an essentially unlimited number of Windows users accessing Windows servers, while the Liberty Alliance tries to play catch-up from the competitive side of the systems world.

Passport brilliantly combines the kludgey and unstable nature of NIS+ with the insecurity of the trusted hosts concept to produce a nine-step process with obvious opportunities (See, for example, Risks of the Passport Single Signon Protocol) for security and other abuses:

  1. A Passport user requests a secure page from a Passport partner
  2. The Passport partner redirects the page request back to the user's browser
  3. Which redirects it back to its authorized Passport server
  4. Which uses a three-step challenge/response approach to authenticating pre-registered users
  5. After which it redirects the now authenticated user back to the Passport partner site
  6. Which instructs the user's browser to write an authentication cookie to the user's PC
  7. Whose presence then authenticates that PC to other Passport partner sites.
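For the morbidly curious, the whole redirect dance can be sketched in a few lines of Python. Every name here is invented for illustration, and the "challenge/response" is collapsed to a single check; this bears no relationship to the real Passport wire protocol:

```python
# Toy simulation of the redirect flow listed above. All names and the
# cookie format are illustrative inventions, not the real Passport protocol.

REGISTERED = {"alice": "s3cret"}   # pre-registered users at the Passport server
cookies = {}                       # the user's PC-side cookie jar

def passport_server(user, password):
    """The three-step challenge/response, collapsed to a single check."""
    return REGISTERED.get(user) == password

def request_secure_page(user, password):
    # Steps 1-3: the partner bounces the request, via the browser,
    # to the user's authorized Passport server.
    # Step 4: the server authenticates the pre-registered user.
    if not passport_server(user, password):
        return "access denied"
    # Steps 5-6: redirected back, the partner has the browser write a cookie.
    cookies["passport"] = "token-for-" + user
    return "secure page"

def other_partner_site():
    # Step 7: the cookie's mere presence authenticates this PC elsewhere.
    return "secure page" if "passport" in cookies else "please sign in"
```

Notice that once the cookie is written, nothing in step 7 involves the user at all - which is exactly where the security worries begin.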

You might assume Rube Goldberg concocted the design. In reality, things got this way through an evolutionary process that, in retrospect, looks as logical and inevitable as a slow-motion train-wreck in a B movie.

The approach favored by the Liberty Alliance is markedly better, with fewer steps and no monopoly on holding or managing authentication information. That said, it only looks good when you compare it to Passport. Consider it independently and you might feel a touch of nostalgia for the days when federated naming-services provided the backbone for federated identity-services.

What's going to become of the conflict between the empire's Passporters and the heroic rebels of the Liberty Alliance? I think Sun's going to make both sides of this debate obsolete by introducing some old technology that solves the problem elegantly.

Although Sun is the company that popularized the notion "the network is the computer," Sun has not yet made this as transparent as it should be. Sun developed NFS as part of its first-generation OS, and this helped a lot (particularly within trusted communities), but it also required central control of shared resources. Solaris 2.0 focused more on adding scalability and reliability than on extending Unix out of the box and across the network. Solaris 2.9, the current release, contains many single-identity tools, but they¡¯re all add-ons to the basic OS rather than being truly integrated with it.

I think that Solaris 3.0 will change all that by adopting a bunch of user- and resource-authentication ideas from Lucent Technologies' Bell Labs' Plan 9. More importantly, I think that Sun's actions will give those ideas, already available to the open source community, new impetus and lead to a battle for Webshare as Linux and BSD Apache administrators decide between joining up with Microsoft and the X-files or taking off for outer space and Plan 9.

XML's roots go back to 1957

Like Passport, Dot.net is distantly based on a family of extensions to XML. I think of these extensions as the X-files; even Bishop Occam would want to blame vast government cover-ups of alien takeovers to explain the weird stuff we encounter in search of an explanation for Passport and dot.net ideas down the XML rabbit hole.

XML started out as a sort of simplified SGML (Standard Generalized Markup Language, a 1983 ANSI standard) and originally inherited many of SGML's key characteristics.

SGML defines how document-markup should be structured and unifies related ideas from both the printing and computing perspectives. On the editorial (or printing) side, SGML got its start the day after Gutenberg's invention of movable type made it necessary to formalize editorial instructions to typesetters. From this perspective, SGML's tags were instructional in nature, as in "start using 42 lines per page here".

An exonym is a group or geographical label applied by outsiders to a group or region but which the inhabitants or members of the group do not themselves use. Xlang is Microsoft's exonym for a bunch of extensions to XML which have the effect of turning it into a two-way communications infrastructure for Web programming.

Microsoft's John Montgomery makes a big deal out of this in an interview reported on devx.com: "We see XML as being kind of the next level of technology that's going to come along and provide a universal description format for data on the Web, and what this is going to enable, from our perspective, is Web programmability."

One context in which to view this use of Xlang as a Web-based messaging interface is Chris Paget's recent demonstration of fundamental - and probably non-repairable - security defects in Windows' internal-messaging interface.

On the computing-practices side, SGML's roots only go back to about 1957. It was in this year that Rand Corp. made its first attempts to implement the COLEX text retrieval system, a development that led to the 1967 commercial release of SDC Dialog (probably (?) the first public-network-based information-service). COLEX was aimed at helping the U.S. Air Force sort through hundreds of thousands - maybe even millions - of technical documents, and it needed some way to differentiate text by type. As a result, COLEX tags were descriptive, as in: TITLE: some title text :END_TITLE.

A third type of tag, combining formatting information with procedural information, was pioneered in early-'60s MIT products like RUNOFF (which begat troff and ditroff). These tags were intentionally eschewed by the SGML committee because SGML was intended to describe document markup, not document processing.

The SGML specification defines two types of information labeling:

  1. data identification
  2. presentation formatting
It does not say anything about data processing; for that you need an application that can interpret and act on SGML markup. That interpreter, in turn, has to drive some kind of output application that puts ink on paper or pixels on screens.

Consequently, the rigid separation of markup information from procedural information means that actual use of SGML needs three things:

  1. A definition of your tags: what they are, what actions they translate to, and to what degree, if any, they can be nested. That set of definitions constitutes the SGML document type to be produced when a document marked up using those tags is processed for formatting and is called, logically enough, a document type definition (DTD)
  2. An application that can interpret the markup and combine it with the document itself to produce output suitable for use as input to a rendering engine
  3. A graphics-output or rendering engine to produce the printed or displayed document.

I'm still confused but...
In this context some readers may recall my problems a few weeks ago differentiating NeXTStep's use of PostScript and the use of PDF within MacOS X. PostScript is a procedural programming language; PDF combines markup with document content and shares the PostScript page imaging model and vocabulary.

NeXTStep used PostScript in both roles: as markup information in files and to process the resulting combined files for screen or print output. That, of course, works extremely well and provides such clean and consistent output that habituation to its use tends to blind the user to problems with other, less functional, display methodologies.

In Robert Cailliau's introduction to the Lie and Bos book Cascading Style Sheets: Designing for the Web (2nd Edition, Addison Wesley, 1999), he discusses Tim Berners-Lee's work on developing HTML and points out that this took place on NeXT machines with PostScript-based displays. Cailliau makes a comment about stylesheets becoming programming languages that qualifies as prescient in the context of what's been happening with XML recently. In the young Web there were no more pagination faults, no more footnotes, no silly word breaks, no fidgeting the text to gain that extra line you sorely needed to fit everything on one page. In the window of a Web page on the NeXTStep system the text was always clean. (...)

Then we descended into the dark ages because the Web exploded into a community that had no idea such freedom was possible but worried about putting on the remote screen exactly what they thought their information should look like. (...)

Fortunately, SGML's philosophy allows us to separate structure from presentation and the Web DTD, HTML, is no exception. Even in the NeXTStep version of 1990, Tim Berners-Lee provided for style sheets. (...)

I've always had one concern: is it possible to create a powerful enough style sheet "language" without ending up with a programming language?

The important thing here is that all of this is non-procedural: the markup tells the rendering engine what to do but not how to do it. In fact, the original ANSI committee made a special point of not including another computing tradition - that of fully integrated markup and processing languages like TROFF/TMac or the later LaTeX.

In general, the document-preparation workflow envisaged in SGML is:

  1. Someone loads or creates the document source text
  2. Someone adds formatting and presentation information using a DTD (markup language) like HTML
  3. The completed document is stored
  4. On request, the markup language is interpreted by a transformer application which outputs graphics commands for a rendering engine
  5. The rendering application interprets the graphics commands to create the user-readable output on screen or paper.
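The division of labor in that workflow can be sketched in a few lines of Python. The COLEX-style tag syntax is borrowed from the example earlier in this article and everything else is invented, but the point survives: the transformer and the renderer are the only executables, and the markup itself never runs anything.

```python
# A minimal sketch of steps 4 and 5: a transformer interprets markup and
# emits output for a rendering engine; the markup is pure data throughout.

import re

def transform(marked_up):
    """Step 4: interpret descriptive tags, emit rendering commands (here, HTML)."""
    return re.sub(r"TITLE:(.*?):END_TITLE", r"<h1>\1</h1>", marked_up)

def render(html):
    """Step 5: a stand-in rendering engine that just strips the tags."""
    return re.sub(r"<[^>]+>", "", html)

doc = "TITLE:Single sign-on then and now:END_TITLE"
assert transform(doc) == "<h1>Single sign-on then and now</h1>"
assert render(transform(doc)) == "Single sign-on then and now"
```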

Notice again that the only executables here are the transformer and rendering applications. The markup language is interpreted by the transformer and rendered by the graphics engine, but the markup language does not itself take on the attributes of a programming language and does not contain executable code.

How well this works in terms of final product quality depends in large part on the quality with which the output is rendered, something which itself depends on both the rendering application and the physical technology used.

The HTML DTD does not offer much direct formatting control; an HTML page displayed using IE on a PC with default fonts, borders, and window sizes will look very different than that same page displayed under Konqueror. What's going on is that each browser has what amounts to an internal stylesheet that determines how text marked up with a format label like <EM> is actually rendered in the local graphics environment.

Cascading stylesheets bring better control where the page meets the PC screen by providing explicit rendering instructions to replace these default choices. For example, the browser default is to show something tagged <H1> somewhat more than three font sizes bigger than, but in the same color as, something tagged <P>; but

<STYLE TYPE="text/css">
H1 { color: blue }
</STYLE>

overrides the default stylesheet to add the instruction that text presented between <H1> tags should also be rendered in blue.

Since a document can contain more than one set of rules either directly or by reference, some complexities arise in deciding which rules apply. In the official CSS specification, those inheritance rules are executed by sorting through presentation rules to find the nearest one not overridden by an "important" label attached to an instruction in a higher level stylesheet. This is a strategy roughly analogous to letting the person whose shouts sound loudest win the argument.
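A toy resolver, in Python, makes the shouting-match analogy concrete. It is a deliberate simplification of the real CSS cascade (no selectors, no specificity arithmetic), not an implementation of it:

```python
# Toy cascade resolver. Rules are listed from the highest-level stylesheet
# down to the most local one; a lower rule normally wins, unless a higher
# rule was marked "important".

def cascade(rules):
    """rules: (property, value, important) tuples, highest stylesheet first."""
    winner = {}
    locked = set()             # properties already claimed by an "important" rule
    for prop, value, important in rules:
        if prop in locked:
            continue           # a louder shout higher up already won
        winner[prop] = value   # otherwise the lower (later) rule overrides
        if important:
            locked.add(prop)
    return winner

rules = [
    ("color", "black", True),     # site-wide stylesheet, marked important
    ("color", "blue", False),     # page-level rule: H1 { color: blue }
    ("font-size", "2em", False),  # browser default for <H1>
]
assert cascade(rules) == {"color": "black", "font-size": "2em"}
```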

Graphically, this process can be presented as an inverted tree with formatting authority cascading down it to the lowest applicable level; hence, eventually, some more X-files: including Xpaths, Xlinks, Xschemas (written in the XML Schema Definition language, or just .XSD), and, more recently, XMLNS or XML name-space files.

When work started in 1996 on yet another SGML DTD, to be known as XML, the need for stylesheets was a well-established part of commercial reality. Two additional standards, often grouped together under the name XSLFO (Extensible Stylesheet Language, Formatting Objects) and reasonably considered generalizations of the stylesheet concept, were co-developed with the XML specification to accommodate this.

The latter of these controls how XML documents are transformed to produce documents that can be rendered by standard engines such as browsers:

XML Document → Transformer (acts on XSL rules) → HTML document

Defining an XML DTD

In defining an XML DTD, you create and then tag the tags; i.e., you:
  1. Define the label tags that will be used to label content in documents of this type; and,
  2. Then tag those tags with presentation information to control how that content will be presented.

In use, this produces at least an XML document containing the labels, an XMLNS (XML Name Space) document containing the definitions, and an XSL document containing the presentation information for use by the output formatter.

This set of solutions met the needs of large numbers of people for controlled document structure and presentation. As a result, a number of XML DTDs were quickly standardized, including one I've been working with, the XBRL specification for an extensible business reporting language, and one many people have been working with, Microsoft's XML definitions for files produced by Microsoft Office.

Except, excuse, or excommunicate?
A report by Alex Gantman on the Neohapsis security track suggests that digitally signed Microsoft Office documents can be tampered with easily: "I have stumbled onto a potential security issue in Microsoft Word. In both cases, the adversary (mis)uses fields to perpetrate the attack. It's important to note that fields are not macros and, as far as I know, cannot be disabled by the user."

Because an INCLUDETEXT statement is part of the hash calculation but what it fetches isn't (that happens at read time), he suggests you can change the included text after the document is signed without affecting the apparent integrity of the digital signature.
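The gap Gantman describes is easy to reproduce in miniature. In this Python sketch the file name is invented and a bare hash stands in for the full signature machinery, but the mechanics are the same: the hash covers the reference, not what the reference fetches.

```python
# Miniature reproduction of the flaw: the signature hash covers the
# INCLUDETEXT reference, not the text it pulls in when the document is
# opened. File name and bare-hash "signature" are illustrative inventions.

import hashlib

external = {"terms.txt": "You owe $10"}        # fetched at read time

def sign(document):
    # Only the reference string enters the hash; its target does not.
    return hashlib.sha256(document.encode()).hexdigest()

def read(document):
    # Fields are resolved when the recipient opens the document.
    return document.replace("INCLUDETEXT terms.txt", external["terms.txt"])

doc = "I agree to the terms. INCLUDETEXT terms.txt"
signature = sign(doc)

external["terms.txt"] = "You owe $10,000"      # attacker edits the included file
assert sign(doc) == signature                  # the signature still verifies...
assert "10,000" in read(doc)                   # ...but the text has changed
```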

One of the side effects of Microsoft's 1998 decision to embrace XML was its immediate extension to provide access to procedural elements. Starting with ActiveX, this has expanded to include various document, or common, object models (DOM/COM) and, most recently, SOAP. The Simple Object Access Protocol was originally intended to provide RPC-like services that bypass firewalls by using port 80 with HTTP, but is now being extended, via the Web Services Description Language (WSDL), to allow for more general forms of communications.

By taking XML across the gap from markup to procedural language, Microsoft made file interchange and information use both easier for Windows developers and more dangerous for users. After all, an XML file is still just a text file that anyone can edit whether it contains procedural information or not.

For example, the extensions made the following possible in an XML document:

<![CDATA[ Virus=new ActiveXObject("WScript.Shell");
Virus.Run("%systemroot%\\SYSTEM32\\CMD.EXE /C DIR C:\ps");]]>
(Note: this example for Microsoft Excel is from http://www.guninski.com/ex$el2.html except that "virus" is not spelled out in the original. This code apparently works with the more recent MSXML4/5.DLL parsers for the major current (mid July, 2002) releases of Windows 2000 and Windows/XP. Also see securitytracker.com's description of an MSXML.dll exploit with respect to SQL-Server 2000 that can allow the execution of arbitrary code by a remote attacker.)

Encryption to the rescue!

Obviously, that raises a problem. Let's say someone sends you a PowerPoint document saved as XML. Should you load it? Delete it? Read the XML file looking for external executable references?

More generally, how do you know:

  1. That a document you receive from a sender has not been changed by someone else? or,
  2. That the sender will neither deny having sent the document nor claim that you, or anyone else, could have modified it en-route?

The technology needed to assure a document recipient that it originates with the ostensible sender and has not been tampered with uses the XML digital signature and encryption standards. These describe how encryption can be used to authenticate documents by defining what is enciphered, how that is done, and how the results are represented in an XML document.

If it quacks like a duck, walks like duck, and looks like a duck, should it have teeth?
The Trusted Computing Platform Alliance (TCPA) looks a lot like an open specification process at work generating the consensual basis for Microsoft's Palladium infrastructure. The TCPA folks, whose Web site is not fully accessible to users of Netscape 4.76 on Solaris and who don't allow just anybody to see past the "Organization" header on their front page, carry the digital signature idea forward into hardware and have produced an interesting, if somewhat frightening, 332-page main specification whose implementation would render Passport's cookies obsolete.

If you do a Google search using just "palladium" you don't find a lot of positive commentary. On the other hand, it could just be the hardware complement needed to enforce new terms that seem to be entering Windows end-user licensing. For example, recent Windows XP Service Pack 1 and Windows 2000 Service Pack 3 licenses state that: "You acknowledge and agree that Microsoft may automatically check the version of the OS Product and/or its components that you are utilizing and may provide upgrades or fixes to the OS Product that will be automatically down loaded to your computer."

One of those components is, I imagine, illustrated in the (made-up) XML excerpt below:

<?xml version="1.0"?>
<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet">
  <registration description="E2KXPSP2" progid="XP803AC54C" version="1.09"/>
  <DocumentProperties xmlns="urn:schemas-microsoft-com:office:office">
    <LocationOfComponents HRef="file:///H:\2KApps\MSOffice%20XP"/>
  </DocumentProperties>
</Workbook>

Using XML, with or without hardware-encoded keys, to enforce licensing might make sense in both the Microsoft and DMCA contexts (although active copy protection built into the distribution medium would be smarter), but the direct threat to Linux here is that a Windows user who needs to interoperate with a user of open source software may become unable to do so because the XML registration tags written by OpenOffice.org won't check out with Passport.

Specifically, the encipherment is handled via PKI (Public Key Infrastructure) on the RSA model. The underlying encryption methodology is clearly explained by Ed Simon, Paul Madsen, and Carlisle Adams in their An Introduction to XML Digital Signatures on the XML.com site.

The key point is that two separate keys are used such that, as Simon et al put it, "a cryptographic transformation encoded with one key can only be reversed with the other."

These keys are related via a hypothetical mathematical construct known as a one-way function. In these, the computational cost of creating two keys is trivial but the computational cost of finding the second key from knowledge of the first is thought to be very high. Thus a PKI user can publish one key while keeping the other secret, thereby creating a situation in which the ability to decrypt something with the public key asserts that it was encrypted with the private key and, by extension, can only be the work of the only holder of that private key. This therefore ensures that the sender cannot repudiate the encrypted data and so amounts to a digital signature.
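Textbook RSA, with toy numbers small enough to check by hand, shows the property at work. Real PKI uses enormous primes and careful padding schemes, so treat this strictly as illustration:

```python
# Textbook RSA with toy numbers: a transformation encoded with one key
# can only be reversed with the other.

p, q = 61, 53
n = p * q                      # 3233: the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # the published (public) exponent
d = pow(e, -1, phi)            # the secret exponent: e's inverse mod phi

m = 65                         # a message (or hash value) as a number < n

signed = pow(m, d, n)          # "encrypt" with the private key...
assert pow(signed, e, n) == m  # ...and anyone with the public key reverses it

cipher = pow(m, e, n)          # encrypt with the public key instead...
assert pow(cipher, d, n) == m  # ...and only the private key reverses that
```

The first pair of lines is the digital signature; the second pair is message privacy. Same arithmetic, opposite key roles.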

Similarly, a sender can use the recipient's public key to encrypt data knowing that only the recipient's private key can be used to decrypt it, thereby ensuring the privacy of the message. With PKI, senders and receivers can exchange public keys and so enter into a secure, signed, exchange. Neither side can know, however, who the other is unless some third party previously attests to both identities. As a result, various certification authorities have evolved on the Web to certify that the identities involved are as represented and, at the cost of an additional pair of PKI-encoded digital information transactions, both sender and receiver can be reasonably assured of the other's real identity.

The normal method for validating a digital document's internal integrity is to record a hash value (a mathematical or heuristic representation of all, or part, of the document in a single, usually short, string of text or a single number) and then to encrypt and transmit that hash value as a "digital digest." Recalculation of the hash value on document receipt and its comparison with the decrypted value is then expected to show whether the document has been tampered with, because a content change will result in computation of a different hash, or digital digest, value.
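The digest check itself is simple enough to show in a few lines; SHA-256 stands in here for whatever hash function a real system would choose, and the document text is invented:

```python
# The "digital digest" check: recompute the hash on receipt and compare
# it with the transmitted value. Any content change shows up as a mismatch.

import hashlib

def digest(document):
    return hashlib.sha256(document.encode()).hexdigest()

original = "Quarterly revenue: $1,000,000"
transmitted = digest(original)       # in practice, encrypted and sent alongside

# The recipient recomputes and compares:
assert digest("Quarterly revenue: $1,000,000") == transmitted   # intact
assert digest("Quarterly revenue: $9,000,000") != transmitted   # tampered
```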

The combination of XML with embedded digital signatures allows information suppliers to assure their customers that the documents they get are authentic and unmodified by third parties. In other words, if that hypothetical PowerPoint document contained a digital signature, and you had the software to verify it, and both the hash and key match checked out, then you would be certain that the document came from the ostensible sender and had not been modified en-route. Equally importantly, if it turned out to contain a virus, the sender would be forced to acknowledge responsibility for that since you could prove who it came from and that the problem existed when the sender affixed the digital signature and thus before the document was sent to you.

These ideas can, of course, be applied in other ways to other problems. It's not a long step, for example, from using digital signature standards to authenticate documents for both sender and content to applying the same ideas to authenticate almost any kind of information exchange, including that needed for a single sign-on system, for remote procedure call authentication, or in XML documents that distribute a user's credit balance or other personal details to third parties.

Liberty Alliance

On the open side of the ledger, these ideas pour directly into the XNS (eXtensible Name Service) attempt to specify a vendor-neutral digital identity infrastructure. On the proprietary side, however, they underlie what became Microsoft¡¯s Passport Service.

One of the responses to Passport is known as the Liberty Alliance, a Sun-inspired effort to produce a genuinely open and interoperable single sign-on and authentication standard.

The Liberty Alliance released its 1.0 Federated Network Identification and Authorization specification on July 11th 2002. In many ways, this is both a simplification and a generalization of the ideas behind Passport but without the proprietary overtones and single point of control characterizing the Microsoft solution.

One of the most interesting things about this specification is its use of SAML (Security Assertion Markup Language) to define and control the messaging structures used in an actual implementation of the specification. Full details, including protocols and the SAML schemas needed, are available at http://www.projectliberty.org/ but, basically, the Liberty specification handles authorization in a three-stage process with all communications structured via SAML and flowing through the user's browser or other software agent.

Microsoft, which had previously announced planned upgrades in its Kerberos-derived security for Passport, also announced, on July 16, 2002, its intention to embrace SAML.

As currently defined, however, SAML is faithful to the distinctions that went into the SGML specification back in 1983 and therefore does not include procedural elements. Thus, SAML is used in the Liberty specification as a technology-agnostic way of conveying assurance information between two or more procedural applications which, respectively, produce and consume the information.

If Microsoft were serious about its support for SAML as a standard it could, of course, adopt the Liberty Alliance single sign-on and authentication specification for its own use and let Passport, and such extensions of XML as its use to bypass firewalls for "Web programming" and remote services execution, die of their own weight.

Enter from stage left, Plan 9!

That may not strike you as likely any time soon, but stranger things have happened, including the growth of a significant following for Plan 9, the movie, and its eponymous influence on the Liberty Alliance specification.

When Ed Wood Jr. put his ideas about film making into Plan 9 from Outer Space, the result became a cult classic defining an entire genre of B movies. When Rob Pike and his colleagues at AT&T Bell Labs first defined the Plan 9 operating system, many of their ideas seemed to be from outer space. The linkage to the Dantesque horrors of Plan 9 has provided such an inexhaustible source of bad puns and in-jokes that I half expect to see Sun release Solaris 3.0 on a Good Friday.

In operation Plan 9 looks a lot like Unix but it is quite different internally in that the original design took many Unix ideas for single machine environments and re-thought them for fully distributed, multiple-machine environments. Key among these is the link between user and machine. In Unix, a user authorization is defined fundamentally with respect to the resources available on a specific machine. In Plan 9, user authorizations are defined for a distributed virtual machine consisting of many physical machines.

Thus, Plan 9 user services present as hierarchical file systems and the machines a user accesses exchange individuality for function. A user may, that is, access a service such as program execution on a CPU server without needing to know anything about that machine in terms of where it is, who owns it, what kind of CPU(s) it has, or what other resources may be available to it locally.

Within the current releases of Plan 9, the core user-authentication functions are handled by an agent called factotum that handles all security interactions on the user's behalf: instruct factotum on the clearances you have and any service requiring authentication can query it, instead of you, to determine what to do about the request.

Technically, this has the enormous benefit of taking the entire cryptographic exchange burden out of the hands of both users and application designers. Managerially, it completely avoids most of the complexities associated with supporting large numbers of users in many-owner, single sign-on environments.
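A toy factotum-like agent is easy to sketch. The HMAC-based challenge/response and the service names below are my inventions for illustration; the real factotum speaks its own, far richer, protocols.

```python
# Toy factotum-style agent: applications hand authentication challenges to
# a single per-user agent instead of prompting the user. Neither the user
# nor the application ever touches the cryptographic exchange directly.

import hashlib
import hmac
import os

class Factotum:
    def __init__(self):
        self._keys = {}                  # the clearances the user loaded

    def ctl(self, service, key):
        """Instruct the agent on a clearance you hold."""
        self._keys[service] = key

    def respond(self, service, challenge):
        """A service queries the agent, instead of the user."""
        return hmac.new(self._keys[service], challenge, hashlib.sha256).digest()

# A CPU server sharing a secret with the user can now verify the agent:
shared = b"per-user secret"
agent = Factotum()
agent.ctl("cpu", shared)

challenge = os.urandom(16)
expected = hmac.new(shared, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(agent.respond("cpu", challenge), expected)
```

The point of the design is visible even in the toy: the key loading (`ctl`) happens once, and every subsequent authentication is agent-to-service, with the user out of the loop.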

Showing off, a Friulian slip?
The first implementations relied on Gnots for display purposes. The Gnot was a true smart display running the 8½ window manager. It was a custom-built, MC68020-based terminal with a big screen and powerful bit-mapped graphics intended to rigorously separate display from file or CPU functions.

In its simplest form, you authenticate at the local level, instruct factotum on dealing with authentication queries, and it works with the network operating system and factotum aware applications to automatically recognize that authentication anywhere the system operates.

Given that the factotum implementation is rigorously based on a mathematical representation of the authentication problem, can use multiple encryption methods independently, and is operationally quite simple, it is likely to be extremely difficult to subvert.

Factotum is considerably simpler in concept and more robust in implementation than the protocol and strategy produced by the Liberty Alliance. The two do, however, bear a vague relationship. It is perhaps what you'd get if some members of the committee putting together the Liberty protocol looked at Passport to see what errors to avoid while others remembered reading about Plan 9's authentication solution. Read the Liberty documentation carefully, recognize that the alliance specification has to work over a much larger and more complex set of ownership, control, and technical interactions, and the two sets of ideas start to look like cousins.

Commercial realities?
A relative, the Andrew file system from Carnegie Mellon, is in widespread use.

Another closely related technology, the pluggable authentication module (PAM), has long been available on Linux.

The Kerberos reference page provides links to information about the original MIT/CMU product.

One of the links between them is in an alliance specification concept called "circles of trust" that would map rather well to global or enterprise-wide Plan 9 implementations if those existed in commercial reality. In such an environment, the network really would be the computer, and that's a key reason I expect Solaris 3.0 to incorporate this functionality as it absorbs more and more of the Plan 9 idea set to deliver truly distributed access to computing resources. If it happens, that will provide a powerful commercial reason for people to adopt these ideas and thus create additional impetus behind their adoption in the general open source world.

If so, we'll have a very clear-cut battle for market dominance between Microsoft and open source ideas on single sign-on:

  • The X-files pile complexity on improbable foundations to derive Passport from SGML, while
  • Plan 9 represents the evolution of Unix through simplification and the re-thinking of very basic design ideas.

In the old days of Windows dominance, the outcome would have been a no-brainer. When Microsoft pointed its checkbook, people surrendered. However, that world is rapidly going away. Now ordinary users mutter about security, governments look at Windows-brand products as national security risks, and lawyers gear up to feast at the tort table as the courts start to enforce our liability for losses to clients whose information we abuse.

With factotum in play, legally "accepted industry practices" may soon no longer include Passport, and trying to make a quick change back to Firefly's original ideas or the MSN wallet solutions will probably just make all of this worse for Microsoft. Why? Because "accepted industry practices" in an unregulated industry are set by a kind of majority vote. The dominance of the Apache toolset means that Microsoft won't have the votes to enforce its approach over technologists' objections or in spite of customers' legal risks.

Open source, on the other hand, continues to gain ground as part of the solution. A rapid public win by factotum over Passport may be enough to flip attitudes in the mainstream press from near automatic approval of Microsoft press releases to due cynicism.

