



"Blast from the Past" No.1: Paul Murphy on Apache & Plan 9

How Apache & Plan 9 will defeat Microsoft's Passport:
Dominance of Apache means Microsoft won't have the votes to enforce its approach

(22 September 2003): Linux gets a lot of press these days, but much of it appears condescending and is more about the phenomenon of its emergence and growth than it is about the value and use of the technology. That may be about to change, and for the better.

As a group, the so-called "mainstream press" often appears to favor Microsoft and show an appalling lack of technical depth in its enthusiastic repetition of the latest Microsoft press release. There's been a lot of speculation on why this is and whether it even happens. So far, no definitive research provides answers one way or the other.

I think there are two explanations. First, the press is largely the victim of a Microsoft marketing strategy that plays off of the social rules people learn in high school: Money and social skills define the in-crowd, and only nerds kvetch about the importance of better technology. Second, editors are part of a feedback loop, reflecting the views of their readers in the decisions they make on content and wording. Their perception of their market as a shallow Microsoft PC market therefore leads them to favor cheerleading over analysis because doing so sells more advertising, more subscriptions, and more advertised product.

What happens when journalists find themselves reporting that the legal and personal risks associated with Microsoft's Passport can easily be avoided by adopting better technology from the open-source world? The normal legal standard for judging the adequacy of professional services, such as those involved in setting up an e-commerce site, is consistency with the "best" or industry-wide practices. What does the mainstream press do when that standard is largely set by the 66 percent of Web administrators who use Apache and open source? I don't know, but I think we're about to find out, courtesy of Plan 9's pending victory over the X-files in the matter of single sign-ons and network authentication.

Single sign-on then & now

In the early days of Unix, resource access and authentication were not problems. Users signed on to the VAX and promptly maxed out the resources for which they were authorized. That worked well... until organizations got a second Unix machine in the door and people who needed two logins complained about the complexity of remembering two passwords.

Microsoft's consent decree
From: E-Legal: Microsoft Enters Into FTC Privacy Consent Decree, by Eric Sinrod

"On Aug. 8, Microsoft entered a 20-year consent order with the Federal Trade Commission with respect to alleged failures of its Passport authentication service to protect the privacy and security of personal information. Hopefully the consent order will result in ensuring the security of personal information.
"Second, Microsoft is required to establish and maintain a comprehensive information security program in writing that is reasonably designed to protect the security of personally identifiable information."

None of the solutions to this problem were as good as they should have been. Kludges like NIS+ and FNS could be made to work for as long as the sysadmins wore their lucky underwear, but these fixes were never exactly stirring advertisements for the simplicity and user focus of Unix design ideas. rhosts and the related facilities used with X11 were, on the other hand, both easy to use and wonderfully effective from a user perspective, but largely unacceptable to security-minded system managers.

Today, the single sign-on problem has escaped the back rooms to become a front-burner competitive issue. Microsoft's Passport service attempts to deliver a single sign-on solution for an essentially unlimited number of Windows users accessing Windows servers, while the Liberty Alliance tries to play catch-up from the competitive side of the systems world.

Passport brilliantly combines the kludgey and unstable nature of NIS+ with the insecurity of the trusted hosts concept to produce a nine-step process with obvious opportunities (See, for example, Risks of the Passport Single Signon Protocol) for security and other abuses:

  1. A Passport user requests a secure page from a Passport partner
  2. The Passport partner redirects the page request back to the user's browser
  3. Which redirects it back to its authorized Passport server
  4. Which uses a three-step challenge/response approach to authenticating pre-registered users
  5. After which it redirects the now authenticated user back to the Passport partner site
  6. Which instructs the user's browser to write an authentication cookie to the user's PC
  7. Whose presence then authenticates that PC to other Passport partner sites.
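The cookie step at the end of that chain can be sketched in a few lines. This is a toy simulation only: the site names, cookie format, and collapsed challenge/response are invented for illustration, and the real protocol is far more involved.

```python
# Toy simulation of the cookie-based flow described above; all names invented.

REGISTERED = {"alice": "secret"}   # users pre-registered with the Passport server
cookie_jar = {}                    # stands in for the user's browser cookie store

def passport_authenticate(user, password):
    # Steps 3-4: the three-step challenge/response, collapsed to one check.
    return REGISTERED.get(user) == password

def request_secure_page(site, user, password):
    # Step 7: an existing cookie vouches for this PC at *any* partner site.
    if "passport" in cookie_jar:
        return f"{site}: page for {cookie_jar['passport']}"
    # Steps 1-5: redirect to the Passport server and authenticate.
    if not passport_authenticate(user, password):
        return f"{site}: denied"
    # Step 6: the partner instructs the browser to write the cookie.
    cookie_jar["passport"] = user
    return f"{site}: page for {user}"

print(request_secure_page("shop.example", "alice", "secret"))
# A second site now accepts the PC on the strength of the cookie alone:
print(request_secure_page("bank.example", "alice", "wrong"))
```

Note that the second request succeeds despite the wrong password: once the cookie exists it is the PC, not the person, that gets authenticated, which is exactly the kind of abuse opportunity critics point to.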

You might assume Rube Goldberg concocted the design. In reality, things got this way through an evolutionary process that, in retrospect, looks as logical and inevitable as a slow-motion train-wreck in a B movie.

The approach favored by the Liberty Alliance is markedly better, with fewer steps and no monopoly on holding or managing authentication information. That said, it only looks good when you compare it to Passport. Consider it independently and you might feel a touch of nostalgia for the days when federated naming-services provided the backbone for federated identity-services.

What's going to become of the conflict between the empire's Passporters and the heroic rebels of the Liberty Alliance? I think Sun's going to make both sides of this debate obsolete by introducing some old technology that solves the problem elegantly.

Although Sun is the company that popularized the notion "the network is the computer," Sun has not yet made this as transparent as it should be. Sun developed NFS as part of its first-generation OS, and this helped a lot (particularly within trusted communities), but it also required central control of shared resources. Solaris 2.0 focused more on adding scalability and reliability than on extending Unix out of the box and across the network. Solaris 2.9, the current release, contains many single-identity tools, but they're all add-ons to the basic OS rather than being truly integrated with it.

I think that Solaris 3.0 will change all that by adopting a bunch of user- and resource-authentication ideas from Lucent Technologies' Bell Labs' Plan 9. More importantly, I think that Sun's actions will give those ideas, already available to the open source community, new impetus and lead to a battle for Webshare as Linux and BSD Apache administrators decide between joining up with Microsoft and the X-files or taking off for outer space and Plan 9.

XML's roots go back to 1957

Like Passport, the Liberty Alliance specification is distantly based on a family of extensions to XML. I think of these extensions as the X-files; even Bishop Occam would want to blame vast government cover-ups of alien takeovers to explain the weird stuff we encounter in search of an explanation for Passport and related ideas down the XML rabbit hole.

XML started out as a sort of simplified SGML (Standard Generalized Markup Language, a 1983 ANSI standard) and originally inherited many of SGML's key characteristics.

SGML defines how document markup should be structured and unifies related ideas from both the printing and computing perspectives. On the editorial (or printing) side, SGML got its start the day after Gutenberg's invention of movable type made it necessary to formalize editorial instructions to typesetters. From this perspective, SGML's tags were instructional in nature, as in "start using 42 lines per page here".

An exonym is a group or geographical label applied by outsiders to a group or region but which the inhabitants or members of the group do not themselves use. Xlang is Microsoft's exonym for a bunch of extensions to XML which have the effect of turning it into a two-way communications infrastructure for Web programming.

Microsoft's John Montgomery makes a big deal out of this in an interview: "We see XML as being kind of the next level of technology that's going to come along and provide a universal description format for data on the Web, and what this is going to enable, from our perspective, is Web programmability."

One context in which to view this use of Xlang as a Web-based messaging interface is Chris Paget's recent demonstration of fundamental (and probably non-repairable) security defects in Windows' internal-messaging interface.

On the computing-practices side, SGML's roots go back only to about 1957. It was in that year that Rand Corp. made its first attempts to implement the COLEX text retrieval system, a development that led to the 1967 commercial release of SDC Dialog (probably the first public-network-based information service). COLEX was aimed at helping the U.S. Air Force sort through hundreds of thousands, maybe even millions, of technical documents, and it needed some way to differentiate text by type. As a result, COLEX tags were descriptive, as in: TITLE: some title text :END_TITLE.

A third type of tag, combining formatting information with procedural information, was pioneered in early '60s MIT products like RUNOFF (which begat troff and ditroff). These tags were intentionally eschewed by the committee because SGML was intended to describe document markup, not document processing.

The SGML specification defines two types of information labeling:

  1. data identification
  2. presentation formatting
It does not say anything about data processing; for that you need an application that can interpret and act on SGML markup. That interpreter, in turn, has to drive some kind of output application that puts ink on paper or pixels on screens.

Consequently, the rigid separation of markup information from procedural information means that actual use of SGML needs three things:

  1. A set of definitions specifying what your tags are, what actions they translate to, and to what degree, if any, they can be nested. That set of definitions constitutes the SGML document type to be produced when a document marked up using those tags is processed for formatting and is called, logically enough, a document type definition (DTD)
  2. An application that can interpret the markup and combine it with the document itself to produce output suitable for use as input to a rendering engine
  3. A graphics-output or rendering engine to produce the printed or displayed document.

I'm still confused but...
In this context some readers may recall my problems a few weeks ago differentiating NeXTStep's use of PostScript from the use of PDF within MacOS X. PostScript is a procedural programming language; PDF combines markup with document content and shares the PostScript page-imaging model and vocabulary.

NeXTStep used PostScript in both roles: as markup information in files and to process the resulting combined files for screen or print output. That, of course, works extremely well and provides such clean and consistent output that habituation to its use tends to blind the user to problems with other, less functional, display methodologies.

In Robert Cailliau's introduction to the Lie and Bos book Cascading Style Sheets: Designing for the Web (2nd Edition, Addison Wesley, 1999), he discusses Tim Berners-Lee's work on developing HTML and points out that this took place on NeXT machines with PostScript-based displays. Cailliau makes a comment about stylesheets becoming programming languages that qualifies as prescient in the context of what's been happening with XML recently: "In the young Web there were no more pagination faults, no more footnotes, no silly word breaks, no fidgeting the text to gain that extra line you sorely needed to fit everything on one page. In the window of a Web page on the NeXTStep system the text was always clean. (...)"

"Then we descended into the dark ages because the Web exploded into a community that had no idea such freedom was possible but worried about putting on the remote screen exactly what they thought their information should look like. (...)"

"Fortunately, SGML's philosophy allows us to separate structure from presentation and the Web DTD, HTML, is no exception. Even in the NeXTStep version of 1990, Tim Berners-Lee provided for style sheets. (...)"

"I've always had one concern: is it possible to create a powerful enough style sheet 'language' without ending up with a programming language?"

The important thing here is that all of this is non-procedural: the markup tells the rendering engine what to do but not how to do it. In fact, the original ANSI committee made a special point of not including another computing tradition, that of fully integrated markup and processing languages like troff/tmac or the later LaTeX.

In general, the document-preparation workflow envisaged in SGML is:

  1. Someone loads or creates the document source text
  2. Someone adds formatting and presentation information using a DTD (markup language) like HTML
  3. The completed document is stored
  4. On request, the markup language is interpreted by a transformer application which outputs graphics commands for a rendering engine
  5. The rendering application interprets the graphics commands to create the user readable output on screen or paper.

Notice again that the only executables here are the transformer and rendering applications. The markup language is interpreted by the transformer and rendered by the graphics engine, but the markup language does not itself take on the attributes of a programming language and does not contain executable code.
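That separation can be sketched in a few lines: the stylesheet and marked-up document below are pure data, and only the transformer and renderer are executable. Tag names and "rendering commands" are invented for illustration.

```python
# Minimal sketch of the markup pipeline: markup is data, never executed.
import re

STYLESHEET = {"TITLE": ("BOLD", "PLAIN")}   # tag -> (pre-command, post-command)

def transform(marked_up):
    """Interpret descriptive markup into rendering commands."""
    commands = []
    for tag, text in re.findall(r"<(\w+)>(.*?)</\1>", marked_up):
        pre, post = STYLESHEET.get(tag, ("", ""))
        commands.append((pre, text, post))
    return commands

def render(commands):
    """Stand-in rendering engine: emits a string instead of ink or pixels."""
    return "".join(f"[{pre}]{text}[{post}]" if pre else text
                   for pre, text, post in commands)

doc = "<TITLE>Plan 9</TITLE><P>Some body text.</P>"
print(render(transform(doc)))
```

However contrived, the shape is the point: the document can be stored, mailed, and edited safely because nothing in it ever runs.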

How well this works in terms of final product quality depends in large part on the quality with which the output is rendered, something which itself depends on both the rendering application and the physical technology used.

The HTML DTD does not offer much direct formatting control; an HTML page displayed using IE on a PC with default fonts, borders, and window sizes will look very different than that same page displayed under Konqueror. What¡¯s going on is that each browser has what amounts to an internal stylesheet that determines how text marked up with a format label like <EM> is actually rendered in the local graphics environment.

Cascading stylesheets bring better control where the page meets the PC screen by providing explicit rendering instructions to replace these default choices. For example, the browser default is to show something tagged <H1> somewhat more than three font sizes bigger than, but in the same color as, something tagged <P>; but

<STYLE TYPE="text/css">
H1 { color: blue }
</STYLE>

overrides the default stylesheet to add the instruction that text presented between <H1> tags should also be rendered in blue.

Since a document can contain more than one set of rules either directly or by reference, some complexities arise in deciding which rules apply. In the official CSS specification, those inheritance rules are executed by sorting through presentation rules to find the nearest one not overridden by an "important" label attached to an instruction in a higher level stylesheet. This is a strategy roughly analogous to letting the person whose shouts sound loudest win the argument.
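That "loudest shout wins" resolution can be sketched as a small function: the nearest rule for a property wins unless a higher-level stylesheet marked its rule important. This deliberately ignores selector specificity and rule origin, which the real CSS cascade also weighs.

```python
# Simplified cascade resolution, ignoring specificity and origin rules.

def resolve(property_name, rule_sets):
    """rule_sets runs from the outermost stylesheet to the nearest one;
    each maps property -> (value, important_flag)."""
    winner = None
    for rules in rule_sets:
        if property_name in rules:
            value, important = rules[property_name]
            if important:
                return value        # an important outer rule cannot be overridden
            winner = value          # otherwise the nearest rule seen so far wins
    return winner

browser_default = {"color": ("black", False)}
site_stylesheet = {"color": ("blue", True)}    # marked !important
inline_style    = {"color": ("red", False)}

print(resolve("color", [browser_default, site_stylesheet, inline_style]))
```

Here the site stylesheet's important rule beats the nearer inline style, so blue wins the shouting match.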

Graphically, this process can be presented as an inverted tree with formatting authority cascading down it to the lowest applicable level; hence, eventually, some more X-files: including Xpaths, Xlinks, Xschemas (done with the eXtensible Stylesheet Definition Language [XSDL] or just .XSD in DOS), and, more recently, XMLNS or XML name-space files.

When work started in 1996 on yet another SGML DTD, to be known as XML, the need for stylesheets was a well-established part of commercial reality. Two additional standards, often grouped together under the name XSLFO (Extensible Stylesheet Language, Formatting Objects) and reasonably considered generalizations of the stylesheet concept, were co-developed with the XML specification to accommodate this.

The latter of these controls how XML documents are transformed to produce documents that can be rendered by standard engines such as browsers:

XML Document → Transformer (acting on XSL rules) → HTML document

Defining an XML DTD

In defining an XML DTD, you create and then tag the tags. That is, you:
  1. Define the label tags that will be used to label content in documents of this type; and,
  2. Then tag those tags with presentation information to control how that content will be presented.

In use, this produces at least an XML document containing the labels, an XMLNS (XML Name Space) document containing the definitions, and an XSL document containing the presentation information for use by the output formatter.

This set of solutions met the needs of large numbers of people for controlled document structure and presentation. As a result a number of XML DTDs were quickly standardized, including one I've been working with, the XBRL specification for an extensible business reporting language, and one many people have been working with, Microsoft's XML definitions for files produced by Microsoft Office.

Except, excuse, or excommunicate?
A report by Alex Gantman on the Neohapsis security track suggests that digitally signed Microsoft Office documents can be tampered with easily: "I have stumbled onto a potential security issue in Microsoft Word. In both cases, the adversary (mis)uses fields to perpetrate the attack. It's important to note that fields are not macros and, as far as I know, cannot be disabled by the user."

Because an INCLUDETEXT statement is part of the hash calculation but the text it fetches isn't (the fetch happens at read time), he suggests you can change the included text after the document is signed without affecting the apparent integrity of the digital signature.
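The flaw is easy to demonstrate in miniature. In this sketch the file name, directive syntax, and use of SHA-256 are all illustrative stand-ins, not the Word mechanism itself.

```python
# Sketch of the flaw: the hash (and so the signature) covers the literal
# INCLUDETEXT directive, not the text it fetches at read time.
import hashlib

external = {"notes.txt": "original wording"}    # stands in for the linked file

def sign(document):
    # Only the document's literal text, directive included, gets hashed.
    return hashlib.sha256(document.encode()).hexdigest()

def display(document):
    # At read time the directive is replaced by the file's current contents.
    return document.replace("INCLUDETEXT notes.txt", external["notes.txt"])

doc = "Contract terms: INCLUDETEXT notes.txt"
signature = sign(doc)

external["notes.txt"] = "altered wording"       # the attacker edits the include
assert sign(doc) == signature                   # the signature still checks out
print(display(doc))                             # but the reader now sees altered text
```

The signature verifies perfectly while the reader sees whatever the attacker last wrote into the included file.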

One of the side effects of Microsoft's 1998 decision to embrace XML was its immediate extension to provide access to procedural elements. Starting with ActiveX, this has expanded to include various document, or common, object models (DOM/COM) and, most recently, SOAP. The Simple Object Access Protocol was originally intended to provide RPC-like services that bypass firewalls by using port 80 with HTTP, but is now being extended, via the Web Services Description Language (WSDL), to allow for more general forms of communications.

By taking XML across the gap from markup to procedural language, Microsoft made file interchange and information use both easier for Windows developers and more dangerous for users. After all, an XML file is still just a text file that anyone can edit whether it contains procedural information or not.

For example, the extensions made the following possible in an XML document:

<![CDATA[
Virus = new ActiveXObject("WScript.Shell");
Virus.Run("%systemroot%\\SYSTEM32\\CMD.EXE /C DIR C:\ps");
]]>
(Note: this example for Microsoft Excel is from$el2.html, except that "virus" is not spelled out in the original. This code apparently works with the more recent MSXML4/5.DLL parsers for the major current (mid-July, 2002) releases of Windows 2000 and Windows XP. See also the description of an MSXML.dll exploit with respect to SQL Server 2000 that can allow the execution of arbitrary code by a remote attacker.)

Encryption to the rescue!

Obviously, that raises a problem. Let's say someone sends you a PowerPoint document saved as XML. Should you load it? Delete it? Read the XML file looking for external executable references?

More generally, how do you know:

  1. That a document you receive from a sender has not been changed by someone else? or,
  2. That the sender will neither deny having sent the document nor claim that you, or anyone else, could have modified it en-route?

The technology needed to assure a document recipient that it originates with the ostensible sender and has not been tampered with uses the XML digital signature and encryption standards. These describe how encryption can be used to authenticate documents by defining what is enciphered, how that is done, and how the results are represented in an XML document.

If it quacks like a duck, walks like duck, and looks like a duck, should it have teeth?
The Trusted Computing Platform Alliance (TCPA) looks a lot like an open specification process at work generating the consensual basis for Microsoft's Palladium infrastructure. The TCPA folks, whose Web site is not fully accessible to users of Netscape 4.76 on Solaris and who don't allow just anybody to see past the "Organization" header on their front page, carry the digital signature idea forward into hardware and have produced an interesting, if somewhat frightening, 332-page main specification whose implementation would render Passport's cookies obsolete.

If you do a Google search using just "palladium" you don't find a lot of positive commentary. On the other hand, it could be just the hardware complement needed to enforce new terms that seem to be entering Windows end-user licensing. For example, recent Windows XP Service Pack 1 and Windows 2000 Service Pack 3 licenses state: "You acknowledge and agree that Microsoft may automatically check the version of the OS Product and/or its components that you are utilizing and may provide upgrades or fixes to the OS Product that will be automatically downloaded to your computer."

One of those components is, I imagine, illustrated in the (made-up) XML excerpt below:

<?xml version="1.0"?>
<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet">
  <registration description="E2KXPSP2" progid="XP803AC54C" version="1.09"/>
  <DocumentProperties xmlns="urn:schemas-microsoft-com:office:office">
    <LocationOfComponents HRef="file:///H:\2KApps\MSOffice%20XP"/>
  </DocumentProperties>
</Workbook>

Using XML, with or without hardware-encoded keys, to enforce licensing might make sense in both the Microsoft and DMCA contexts (although active copy protection built into the distribution medium would be smarter), but the direct threat to Linux here is that a Windows user who needs to interoperate with a user of open source software may become unable to do so because the XML registration tags written by that software won't check out with Passport.

Specifically, the encipherment is handled via PKI (Public Key Infrastructure) on the RSA model. The underlying encryption methodology is clearly explained by Ed Simon, Paul Madsen, and Carlisle Adams in their An Introduction to XML Digital Signatures.

The key point is that two separate keys are used such that, as Simon et al put it, "a cryptographic transformation encoded with one key can only be reversed with the other."

These keys are related via a hypothetical mathematical construct known as a one-way function. In these, the computational cost of creating two keys is trivial but the computational cost of finding the second key from knowledge of the first is thought to be very high. Thus a PKI user can publish one key while keeping the other secret, thereby creating a situation in which the ability to decrypt something with the public key asserts that it was encrypted with the private key and, by extension, can only be the work of the only holder of that private key. This therefore ensures that the sender cannot repudiate the encrypted data and so amounts to a digital signature.
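The sign-with-one-key, verify-with-the-other asymmetry can be made concrete with toy RSA numbers. These primes are absurdly small and there is no padding, so this is a classroom illustration of the arithmetic, not usable cryptography.

```python
# Toy RSA with tiny primes; for illustration of the asymmetry only.

p, q = 61, 53
n = p * q                            # 3233, the public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, the "hard to find" key

def sign(message_digest):
    # "Encrypt" the digest with the private key.
    return pow(message_digest, d, n)

def verify(signature, message_digest):
    # Reverse the transformation with the public key.
    return pow(signature, e, n) == message_digest

digest = 1234                        # stands in for a document's hash value
sig = sign(digest)
print(verify(sig, digest))           # only the private-key holder could make sig
print(verify(sig, digest + 1))       # any change to the digest breaks verification
```

With real key sizes, deriving d from e and n is what's believed computationally infeasible; that belief is the entire load-bearing wall of the scheme.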

Similarly, a sender can use the recipient¡¯s public key to encrypt data knowing that only the recipient¡¯s private key can be used to decrypt it, thereby ensuring the privacy of the message. With PKI, senders and receivers can exchange public keys and so enter into a secure, signed, exchange. Neither side can know, however, who the other is unless some third party previously attests to both identities. As a result various certification authorities have evolved on the Web to certify that the identities involved are as represented and, at the cost of an additional pair of PKI encoded digital information transactions, both sender and receiver can be reasonably assured of the other¡¯s real identity.

The normal method for validating a digital document¡¯s internal integrity is to record a hash value (a mathematical or heuristic representation of all, or part, of the document in a single, usually short, string of text or a single number) and then to encrypt and transmit that hash value as a "digital digest." Recalculation of the hash value on document receipt and its comparison with the decrypted value is then expected to show whether the document has been tampered with because a content change will result in computation of a different hash, or digital digest, value.
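The digest check itself is a few lines, here sketched with SHA-256 standing in for whichever hash the real signature scheme specifies.

```python
# Tamper detection by digest comparison, as described above.
import hashlib

def digest(document):
    """Reduce a document to a short, content-sensitive string."""
    return hashlib.sha256(document.encode()).hexdigest()

original = "Pay to the order of Alice: $100"
transmitted = digest(original)       # sent, encrypted, alongside the document

received = "Pay to the order of Alice: $900"   # tampered en route
print(digest(received) == transmitted)          # False: the change is detected
```

The one-character change produces a completely different digest, which is exactly the property the recipient's recalculation relies on.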

The combination of XML with embedded digital signatures allows information suppliers to assure their customers that the documents they get are authentic and unmodified by third parties. In other words, if that hypothetical PowerPoint document contained a digital signature, and you had the software to verify it, and both the hash and key match checked out, then you would be certain that the document came from the ostensible sender and had not been modified en-route. Equally importantly, if it turned out to contain a virus, the sender would be forced to acknowledge responsibility for that since you could prove who it came from and that the problem existed when the sender affixed the digital signature and thus before the document was sent to you.

These ideas can, of course, be applied in other ways to other problems. It's not a long step, for example, from using digital signature standards to authenticate documents for both sender and content to applying the same ideas to authenticate almost any kind of information exchange, including that needed for a single sign-on system, for remote procedure call authentication, or in XML documents that distribute a user's credit balance or other personal details to third parties.

Liberty Alliance

On the open side of the ledger, these ideas pour directly into the XNS (eXtensible Name Service) attempt to specify a vendor-neutral digital identity infrastructure. On the proprietary side, however, they underlie what became Microsoft¡¯s Passport Service.

One of the responses to Passport is known as the Liberty Alliance, a Sun-inspired effort to produce a genuinely open and interoperable single sign-on and authentication standard.

The Liberty Alliance released its 1.0 Federated Network Identification and Authorization specification on July 11th 2002. In many ways, this is both a simplification and a generalization of the ideas behind Passport but without the proprietary overtones and single point of control characterizing the Microsoft solution.

One of the most interesting things about this specification is its use of SAML (Security Assertion Markup Language) to define and control the messaging structures used in an actual implementation of the specification. Full details, including protocols and the SAML schemas needed, are available from the alliance; basically, the Liberty specification handles authorization in a three-stage process with all communications structured via SAML and flowing through the user's browser or other software agent.

Microsoft, which had previously announced planned upgrades to its Kerberos-derived security for Passport, also announced, on July 16, 2002, its intention to embrace SAML.

As currently defined, however, SAML is faithful to the distinctions that went into the SGML specification back in 1983 and therefore does not include procedural elements. Thus, SAML is used in the Liberty specification as a technology-agnostic way of conveying assurance information between two or more procedural applications which, respectively, produce and consume the information.

If Microsoft were serious about its support for SAML as a standard it could, of course, adopt the Liberty Alliance single sign-on and authentication specification for its own use and let Passport, and such extensions of XML as its use to bypass firewalls for "Web programming" and remote services execution, die of their own weight.

Enter from stage left, Plan 9!

That may not strike you as likely any time soon, but stranger things have happened, including the growth of a significant following for Plan 9, the movie, and its eponymous influence on the Liberty Alliance specification.

When Ed Wood Jr. put his ideas about filmmaking into Plan 9 from Outer Space the result became a cult classic defining an entire genre of B movies. When Rob Pike and his colleagues at AT&T Bell Labs first defined the Plan 9 operating system, many of their ideas seemed to be from outer space. The linkage to the Dantesque horrors of Plan 9 proved such an inexhaustible source of bad puns and in-jokes that I half expect to see Sun release SunOS 3.0 on a Good Friday.

In operation Plan 9 looks a lot like Unix but it is quite different internally in that the original design took many Unix ideas for single machine environments and re-thought them for fully distributed, multiple-machine environments. Key among these is the link between user and machine. In Unix, a user authorization is defined fundamentally with respect to the resources available on a specific machine. In Plan 9, user authorizations are defined for a distributed virtual machine consisting of many physical machines.

Thus, Plan 9 user services present as hierarchical file systems and the machines a user accesses exchange individuality for function. A user may, that is, access a service such as program execution on a CPU server without needing to know anything about that machine in terms of where it is, who owns it, what kind of CPU(s) it has, or what other resources may be available to it locally.
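A toy model makes the idea concrete: services are mounted into a single hierarchical namespace, and the user resolves a path without ever naming a machine. The paths and service names below are invented for illustration, not Plan 9's actual bindings.

```python
# Toy model of services presenting as a file hierarchy; names invented.

namespace = {}              # path prefix -> service handler

def mount(prefix, service):
    """Bind a service into the user's namespace under a path prefix."""
    namespace[prefix] = service

def read(path):
    """Resolve a path to whichever service claims its prefix."""
    for prefix, service in namespace.items():
        if path.startswith(prefix):
            return service(path[len(prefix):])
    raise FileNotFoundError(path)

# A "CPU server" and a "file server", each reduced to a function here.
mount("/n/cpu", lambda rest: "executed " + rest)
mount("/n/files", lambda rest: "contents of " + rest)

print(read("/n/cpu/bin/date"))     # the caller never names a machine
```

The caller's view is just paths; where the CPU server lives, who owns it, and what hardware it runs on are all hidden behind the mount.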

Within the current releases of Plan 9 the core user authentication functions are handled by an agent called factotum that handles all security interactions on the user's behalf: instruct factotum on the clearances you have, and any service requiring authentication can query it, instead of you, to determine what to do about the request.

Technically, this has the enormous benefit of taking the entire cryptographic exchange burden out of the hands of both users and application designers. Managerially, it completely avoids most of the complexities associated with supporting large numbers of users in many-owner, single sign-on environments.
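The agent idea can be sketched as follows: a per-user object holds the credentials, and services query it rather than the user, so neither the user nor the application code ever touches keys directly. The challenge/response scheme here is invented, far simpler than factotum's real protocols.

```python
# Sketch of the factotum idea; the "protocol" here is invented.

class Factotum:
    def __init__(self):
        self._keys = {}                        # service -> credential

    def instruct(self, service, credential):
        """Tell the agent about a clearance you hold."""
        self._keys[service] = credential

    def respond(self, service, challenge):
        """Answer a service's authentication query on the user's behalf."""
        key = self._keys.get(service)          # stand-in for real crypto
        return None if key is None else challenge + ":" + key

def cpu_server_login(agent):
    """A service queries the agent, not the user, to authenticate."""
    challenge = "nonce42"
    return agent.respond("cpu.example", challenge) == challenge + ":s3kr1t"

agent = Factotum()
agent.instruct("cpu.example", "s3kr1t")
print(cpu_server_login(agent))                 # factotum answered for the user
```

Since only the agent holds keys, application writers never implement cryptographic exchanges and administrators never distribute credentials to individual programs.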

Showing off, a Friulian slip?
The first implementations relied on Gnots for display purposes. The Gnot was a true smart display running the 8½ window manager. It was a custom-built, MC68020-based terminal with a big screen and powerful bit-mapped graphics, intended to rigorously separate display from file or CPU functions.

In its simplest form, you authenticate at the local level, instruct factotum on dealing with authentication queries, and it works with the network operating system and factotum aware applications to automatically recognize that authentication anywhere the system operates.

Given that the factotum implementation is rigorously based on a mathematical representation of the authentication problem, can use multiple encryption methods independently, and is operationally quite simple, it is likely to be extremely difficult to subvert.

Factotum is considerably simpler in concept and more robust in implementation than the protocol and strategy produced by the Liberty Alliance. The two do, however, bear a vague relationship. It is perhaps what you'd get if some members of the committee putting together the Liberty protocol looked at Passport to see what errors to avoid while others remembered reading about Plan 9's authentication solution. Read the Liberty documentation carefully, recognize that the alliance specification has to work over a much larger and more complex set of ownership, control, and technical interactions, and the two sets of ideas start to look like cousins.

Commercial realities?
A relative, the Andrew File System from Carnegie Mellon, is in widespread use.

Another closely related technology, the pluggable authentication module (PAM), has long been available on Linux.
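PAM makes the same basic separation: applications call a common library, and a per-service stack of modules decides how authentication actually happens, so applications never handle credentials or protocols themselves. A sketch of what such a stack might look like follows (the module names are standard Linux-PAM modules, but the exact file and entries vary by distribution):

```
# /etc/pam.d/login  (illustrative only; real files differ by distribution)
auth      required   pam_unix.so
account   required   pam_unix.so
password  required   pam_unix.so
session   required   pam_unix.so
```

Swapping in a different module on these lines changes how users authenticate without touching the login program itself.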

The Kerberos reference page provides links to information about the original MIT/CMU product.

One of the links between them is in an alliance specification concept called "circles of trust" that would map rather well to global or enterprise-wide Plan 9 implementations if those existed in commercial reality. In such an environment, the network really would be the computer, and that's a key reason I expect Solaris 3.0 to incorporate this functionality as it absorbs more and more of the Plan 9 idea set to deliver truly distributed access to computing resources. If it happens, that will provide a powerful commercial reason for people to adopt these ideas and thus create additional impetus behind their adoption in the general open source world.

If so, we'll have a very clear-cut battle for market dominance between Microsoft and open source ideas on single sign-on:

  • The X-files pile complexity on improbable foundations to derive Passport from SGML, while
  • Plan 9 represents the evolution of Unix through simplification and the re-thinking of very basic design ideas.

In the old days of Windows dominance, the outcome would have been a no-brainer. When Microsoft pointed its checkbook, people surrendered. However, that world is rapidly going away. Now ordinary users mutter about security, governments look at Windows-brand products as national security risks, and lawyers gear up to feast at the tort table as the courts start to enforce liability for losses to clients whose information we abuse.

With factotum in play, legally "accepted industry practices" may soon no longer include Passport, and trying to make a quick change back to Firefly's original ideas or the MSN wallet solutions will probably just make all of this worse for Microsoft. Why? Because "accepted industry practices" in an unregulated industry are set by a kind of majority vote. The dominance of the Apache toolset means that Microsoft won't have the votes to enforce its approach over technologists' objections or in spite of customers' legal risks.

Open source, on the other hand, continues to gain ground as part of the solution. A rapid public win by factotum over Passport may be enough to flip attitudes in the mainstream press from near automatic approval of Microsoft press releases to due cynicism.

