
"Blast from the Past" No.1: Paul Murphy on Apache & Plan 9

How Apache & Plan 9 will defeat Microsoft's Passport:
Dominance of Apache means Microsoft won't have the votes to enforce its approach

(22 September 2003) -- Linux gets a lot of press these days, but much of it appears condescending and is more about the phenomenon of its emergence and growth than it is about the value and use of the technology. That may be about to change, and for the better.

As a group, the so-called "mainstream press" often appears to favor Microsoft and show an appalling lack of technical depth in its enthusiastic repetition of the latest Microsoft press release. There's been a lot of speculation on why this is and whether it even happens. So far, no definitive research provides answers one way or the other.

I think there are two explanations. First, the press is largely the victim of a Microsoft marketing strategy that plays off of the social rules people learn in high school: Money and social skills define the in-crowd, and only nerds kvetch about the importance of better technology. Second, editors are part of a feedback loop, reflecting the views of their readers in the decisions they make on content and wording. Their perception of their market as a shallow Microsoft PC market therefore leads them to favor cheerleading over analysis because doing so sells more advertising, more subscriptions, and more advertised product.

What happens when journalists find themselves reporting that the legal and personal risks associated with Microsoft's Passport can easily be avoided by adopting better technology from the open-source world? The normal legal standard for judging the adequacy of professional services, such as those involved in setting up an e-commerce site, is consistency with the "best" or industry-wide practices. What does the mainstream press do when that standard is largely set by the 66 percent of Web administrators who use Apache and open source? I don't know, but I think we're about to find out courtesy of Plan 9's pending victory over the X-files in the matter of single sign-ons and network authentication.

Single sign-on then & now

In the early days of Unix, resource access and authentication were not problems. Users signed on to the VAX and promptly maxed out the resources for which they were authorized. That worked well... until organizations got a second Unix machine in the door and people who needed two logins complained about the complexity of remembering two passwords.

Microsoft's consent decree
From: E-Legal: Microsoft Enters Into FTC Privacy Consent Decree, by Eric Sinrod

"On Aug. 8, Microsoft entered a 20-year consent order with the Federal Trade Commission with respect to alleged failures of its Passport authentication service to protect the privacy and security of personal information. Hopefully the consent order will result in ensuring the security of personal information.
"Second, Microsoft is required to establish and maintain a comprehensive information security program in writing that is reasonably designed to protect the security of personally identifiable information."

None of the solutions to this problem were as good as they should have been. Kludges like NIS+ and FNS could be made to work for as long as the sysadmins wore their lucky underwear, but these fixes were never exactly stirring advertisements for the simplicity and user focus of Unix design ideas. The rhosts mechanism and the related facilities used with X11 were, on the other hand, both easy to use and wonderfully effective from a user perspective, but largely unacceptable to security-minded system managers.

Today, the single sign-on problem has escaped the back rooms to become a front-burner competitive issue. Microsoft's Passport service attempts to deliver a single sign-on solution for an essentially unlimited number of Windows users accessing Windows servers, while the Liberty Alliance tries to play catch-up from the competitive side of the systems world.

Passport brilliantly combines the kludgey and unstable nature of NIS+ with the insecurity of the trusted hosts concept to produce a nine-step process with obvious opportunities (See, for example, Risks of the Passport Single Signon Protocol) for security and other abuses:

  1. A Passport user requests a secure page from a Passport partner
  2. The Passport partner redirects the page request back to the user's browser
  3. Which redirects it back to its authorized Passport server
  4. Which uses a three-step challenge/response approach to authenticating pre-registered users
  5. After which it redirects the now authenticated user back to the Passport partner site
  6. Which instructs the user's browser to write an authentication cookie to the user's PC
  7. Whose presence then authenticates that PC to other Passport partner sites.
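
The chain above can be sketched as a toy simulation. All class and method names here are invented for illustration, and the real protocol runs over HTTP redirects rather than direct calls:

```python
import secrets

class PassportServer:
    """Central authenticator: checks credentials, issues tickets (step 4)."""
    def __init__(self):
        self.users = {}          # username -> password, pre-registered
        self.tickets = set()     # tickets this server has issued

    def register(self, user, password):
        self.users[user] = password

    def authenticate(self, user, password):
        # Stands in for the challenge/response exchange of steps 3-5.
        if self.users.get(user) == password:
            ticket = secrets.token_hex(8)
            self.tickets.add(ticket)
            return ticket
        return None

class PartnerSite:
    """A partner site: trusts any ticket the central server issued."""
    def __init__(self, server):
        self.server = server

    def request_secure_page(self, browser):
        # Steps 1-2 and 6-7: no cookie means a redirect to the Passport
        # server; a valid cookie authenticates this browser directly.
        ticket = browser.cookies.get("passport")
        if ticket is not None and ticket in self.server.tickets:
            return "secure page"
        return "redirect to passport login"

class Browser:
    def __init__(self):
        self.cookies = {}

server = PassportServer()
server.register("alice", "s3cret")
site_a, site_b = PartnerSite(server), PartnerSite(server)
browser = Browser()

assert site_a.request_secure_page(browser) == "redirect to passport login"
browser.cookies["passport"] = server.authenticate("alice", "s3cret")
# One cookie now authenticates this PC to *every* partner site.
assert site_a.request_secure_page(browser) == "secure page"
assert site_b.request_secure_page(browser) == "secure page"
```

What the sketch makes concrete is the last step: once the cookie exists, every partner site trusts it, which is the trusted-hosts weakness in a new costume.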

You might assume Rube Goldberg concocted the design. In reality, things got this way through an evolutionary process that, in retrospect, looks as logical and inevitable as a slow-motion train-wreck in a B movie.

The approach favored by the Liberty Alliance is markedly better, with fewer steps and no monopoly on holding or managing authentication information. That said, it only looks good when you compare it to Passport. Consider it independently and you might feel a touch of nostalgia for the days when federated naming-services provided the backbone for federated identity-services.

What's going to become of the conflict between the empire's Passporters and the heroic rebels of the Liberty Alliance? I think Sun's going to make both sides of this debate obsolete by introducing some old technology that solves the problem elegantly.

Although Sun is the company that popularized the notion "the network is the computer," Sun has not yet made this as transparent as it should be. Sun developed NFS as part of its first-generation OS, and this helped a lot (particularly within trusted communities), but it also required central control of shared resources. Solaris 2.0 focused more on adding scalability and reliability than on extending Unix out of the box and across the network. Solaris 2.9, the current release, contains many single-identity tools, but they¡¯re all add-ons to the basic OS rather than being truly integrated with it.

I think that Solaris 3.0 will change all that by adopting a bunch of user- and resource-authentication ideas from Lucent Technologies' Bell Labs' Plan 9. More importantly, I think that Sun's actions will give those ideas, already available to the open source community, new impetus and lead to a battle for Webshare as Linux and BSD Apache administrators decide between joining up with Microsoft and the X-files or taking off for outer space and Plan 9.

XML's roots go back to 1957

Like Passport, the Liberty Alliance specification is distantly based on a family of extensions to XML. I think of these extensions as the X-files; even Bishop Occam would invoke vast government cover-ups of alien takeovers to explain the weird stuff we encounter down the XML rabbit hole in search of an explanation for Passport.

XML started out as a sort of simplified SGML (Standard Generalized Markup Language, a 1983 ANSI standard) and originally inherited many of SGML's key characteristics.

SGML defines how document markup should be structured and unifies related ideas from both the printing and computing perspectives. On the editorial (or printing) side, SGML got its start the day after Gutenberg's invention of movable type made it necessary to formalize editorial instructions to typesetters. From this perspective, SGML's tags were instructional in nature, as in "start using 42 lines per page here".

An exonym is a label that outsiders apply to a group or region but that the members of the group do not themselves use. Xlang is Microsoft's exonym for a bunch of extensions to XML which have the effect of turning it into a two-way communications infrastructure for Web programming.

Microsoft's John Montgomery makes a big deal out of this in a reported interview: "We see XML as being kind of the next level of technology that's going to come along and provide a universal description format for data on the Web, and what this is going to enable, from our perspective, is Web programmability."

One context in which to view this use of Xlang as a Web-based messaging interface is Chris Paget's recent demonstration of fundamental, and probably non-repairable, security defects in Windows' internal messaging interface.

On the computing-practices side, SGML's roots only go back to about 1957. It was in this year that Rand Corp. made its first attempts to implement the COLEX text retrieval system, a development that led to the 1967 commercial release of SDC Dialog (probably the first public-network-based information service). COLEX was aimed at helping the U.S. Air Force sort through hundreds of thousands, maybe even millions, of technical documents, and it needed some way to differentiate text by type. As a result, COLEX tags were descriptive, as in: TITLE: some title text :END_TITLE.

A third type of tag, combining formatting information with procedural information, was pioneered in early '60s MIT products like RUNOFF (which begat troff and ditroff). These tags were intentionally eschewed by the committee because SGML was intended to describe document markup, not document processing.

The SGML specification defines two types of information labeling:

  1. data identification
  2. presentation formatting
It does not say anything about data processing; for that you need an application that can interpret and act on SGML markup. That interpreter, in turn, has to drive some kind of output application that puts ink on paper or pixels on screens.

Consequently, the rigid separation of markup information from procedural information means that actual use of SGML needs three things:

  1. A document type definition (DTD): a statement of what your tags are, what actions they translate to, and to what degree, if any, they can be nested. That set of definitions constitutes, logically enough, the SGML document type produced when a document marked up with those tags is processed for formatting
  2. An application that can interpret the markup and combine it with the document itself to produce output suitable for use as input to a rendering engine
  3. A graphics-output or rendering engine to produce the printed or displayed document.

I'm still confused but...
In this context some readers may recall my problems a few weeks ago differentiating NeXTStep's use of PostScript and the use of PDF within MacOS X. PostScript is a procedural programming language; PDF combines markup with document content and shares the PostScript page imaging model and vocabulary.

NeXTStep used PostScript in both roles: as markup information in files and to process the resulting combined files for screen or print output. That, of course, works extremely well and provides such clean and consistent output that habituation to its use tends to blind the user to problems with other, less functional, display methodologies.

In Robert Cailliau's introduction to the Lie and Bos book Cascading Style Sheets: Designing for the Web (2nd Edition, Addison Wesley, 1999), he discusses Tim Berners-Lee's work on developing HTML and points out that this took place on NeXT machines with PostScript-based displays. Cailliau makes a comment about stylesheets becoming programming languages that qualifies as prescient in the context of what's been happening with XML recently: In the young Web there were no more pagination faults, no more footnotes, no silly word breaks, no fidgeting the text to gain that extra line you sorely needed to fit everything on one page. In the window of a Web page on the NeXTStep system the text was always clean. (...)

Then we descended into the dark ages because the Web exploded into a community that had no idea such freedom was possible but worried about putting on the remote screen exactly what they thought their information should look like. (...)

Fortunately, SGML's philosophy allows us to separate structure from presentation, and the Web DTD, HTML, is no exception. Even in the NeXTStep version of 1990, Tim Berners-Lee provided for style sheets. (...)

I've always had one concern: is it possible to create a powerful enough style sheet "language" without ending up with a programming language?

The important thing here is that all of this is non-procedural: the markup tells the rendering engine what to do but not how to do it. In fact, the original ANSI committee made a special point of not including another computing tradition, that of fully integrated markup and processing languages like troff/tmac or the later LaTeX.

In general, the document-preparation workflow envisaged in SGML is:

  1. Someone loads or creates the document source text
  2. Someone adds formatting and presentation information using a DTD (markup language) like HTML
  3. The completed document is stored
  4. On request, the markup language is interpreted by a transformer application which outputs graphics commands for a rendering engine
  5. The rendering application interprets the graphics commands to create the user-readable output on screen or paper.

Notice again that the only executables here are the transformer and rendering applications. The markup language is interpreted by the transformer and rendered by the graphics engine, but the markup language does not itself take on the attributes of a programming language and does not contain executable code.
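
The separation between inert markup and the executable transformer can be sketched in a few lines. The tag names and the tag-to-HTML mapping below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Descriptive markup (step 2): tags label what the text *is*.
source = "<doc><title>Plan 9</title><para>Born at Bell Labs.</para></doc>"

# The transformer's rules (step 4): map each descriptive tag to a
# presentation tag a rendering engine understands.
TAG_MAP = {"title": "h1", "para": "p"}

def transform(markup):
    root = ET.fromstring(markup)
    out = []
    for child in root:
        html_tag = TAG_MAP[child.tag]        # interpret the markup...
        out.append(f"<{html_tag}>{child.text}</{html_tag}>")
    return "".join(out)                      # ...emit input for a renderer

assert transform(source) == "<h1>Plan 9</h1><p>Born at Bell Labs.</p>"
```

The markup itself never executes; only the transform function does, which is exactly the separation the SGML committee insisted on.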

How well this works in terms of final product quality depends in large part on the quality with which the output is rendered, something which itself depends on both the rendering application and the physical technology used.

The HTML DTD does not offer much direct formatting control; an HTML page displayed using IE on a PC with default fonts, borders, and window sizes will look very different than that same page displayed under Konqueror. What's going on is that each browser has what amounts to an internal stylesheet that determines how text marked up with a format label like <EM> is actually rendered in the local graphics environment.

Cascading stylesheets bring better control where the page meets the PC screen by providing explicit rendering instructions to replace these default choices. For example, the browser default is to show something tagged <H1> roughly three font sizes bigger than, but in the same color as, something tagged <P>. The fragment

<STYLE TYPE="text/css">
H1 { color: blue }
</STYLE>

overrides the default stylesheet to add the instruction that text presented between <H1> tags should also be rendered in blue.

Since a document can contain more than one set of rules either directly or by reference, some complexities arise in deciding which rules apply. In the official CSS specification, those inheritance rules are executed by sorting through presentation rules to find the nearest one not overridden by an "important" label attached to an instruction in a higher level stylesheet. This is a strategy roughly analogous to letting the person whose shouts sound loudest win the argument.
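
The shouting match can be reduced to a few lines of pseudo-cascade. This is a deliberate simplification of the real CSS algorithm, which also weighs selector specificity and rule origin:

```python
def resolve(rules):
    """Pick the winning value for one property.

    rules: (value, important) pairs ordered from the outermost
    stylesheet to the nearest one.
    """
    important = [value for value, imp in rules if imp]
    if important:
        return important[-1]   # the loudest (!important) voice wins
    return rules[-1][0]        # otherwise the nearest rule wins

# H1 color: browser default, then a sheet marked !important,
# then a nearer rule that would normally take precedence.
assert resolve([("black", False), ("blue", True), ("red", False)]) == "blue"
assert resolve([("black", False), ("red", False)]) == "red"
```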

Graphically, this process can be presented as an inverted tree with formatting authority cascading down it to the lowest applicable level; hence, eventually, some more X-files: including Xpaths, Xlinks, Xschemas (done with the eXtensible Stylesheet Definition Language [XSDL] or just .XSD), and, more recently, XMLNS or XML name-space files.

When work started in 1996 on yet another SGML DTD, to be known as XML, the need for stylesheets was a well-established part of commercial reality. Two additional standards, often grouped together under the name XSLFO (Extensible Stylesheet Language, Format Objects) and reasonably considered generalizations of the stylesheet concept, were co-developed with the XML specification to accommodate this.

The latter of these controls how XML documents are transformed to produce documents that can be rendered by standard engines such as browsers:

XML document → transformer (applying XSL rules) → HTML document

Defining an XML DTD

In defining an XML DTD, you create and then tag the tags. That is, you:
  1. Define the label tags that will be used to label content in documents of this type; and,
  2. Then tag those tags with presentation information to control how that content will be presented.

In use, this produces at least an XML document containing the labels, an XMLNS (XML Name Space) document containing the definitions, and an XSL document containing the presentation information for use by the output formatter.

This set of solutions met the needs of large numbers of people for controlled document structure and presentation. As a result, a number of XML DTDs were quickly standardized, including one I've been working with, the XBRL specification for an extensible business reporting language, and one many people have been working with, Microsoft's XML definitions for files produced by Microsoft Office.

Except, excuse, or excommunicate?
A report by Alex Gantman on the Neohapsis security track suggests that digitally signed Microsoft Office documents can be tampered with easily: "I have stumbled onto a potential security issue in Microsoft Word. In both cases, the adversary (mis)uses fields to perpetrate the attack. It's important to note that fields are not macros and, as far as I know, cannot be disabled by the user."

Because an INCLUDETEXT statement is part of the hash calculation but what it fetches isn't (that happens at read time), he suggests you can change the included text after signing without affecting the apparent integrity of the digital signature.
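
The gap Gantman describes reduces to a few lines: the digest covers the field code, not the text it fetches at read time. File and field contents here are invented for illustration:

```python
import hashlib

external = {"boilerplate.txt": "Payment due in 30 days."}

# The signed document contains the field code, not the fetched text.
document = 'Dear customer: INCLUDETEXT "boilerplate.txt"'

def sign(doc):
    # Only the literal document text enters the digest.
    return hashlib.sha256(doc.encode()).hexdigest()

def render(doc):
    # At read time the field is expanded from the external file.
    return doc.replace('INCLUDETEXT "boilerplate.txt"',
                       external["boilerplate.txt"])

signature = sign(document)
external["boilerplate.txt"] = "Payment due in 3 days."   # tampering
assert sign(document) == signature       # the signature still verifies...
assert "3 days" in render(document)      # ...but the reader sees new text
```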

One of the side effects of Microsoft's 1998 decision to embrace XML was its immediate extension to provide access to procedural elements. Starting with ActiveX, this has expanded to include various document, or common, object models (DOM/COM) and, most recently, SOAP. The Simple Object Access Protocol was originally intended to provide RPC-like services that bypass firewalls by using port 80 with HTTP but is now being extended, via the Web Services Description Language (WSDL), to allow for more general forms of communications.

By taking XML across the gap from markup to procedural language, Microsoft made file interchange and information use both easier for Windows developers and more dangerous for users. After all, an XML file is still just a text file that anyone can edit whether it contains procedural information or not.

For example, the extensions made the following possible in an XML document:

<![CDATA[ Virus=new ActiveXObject("WScript.Shell");
Virus.Run("%systemroot%\\SYSTEM32\\CMD.EXE /C DIR C:\ps");]]>
(Note: this example for Microsoft Excel is from$el2.html, except that "virus" is not spelled out in the original. This code apparently works with the more recent MSXML4/5.DLL parsers for the major current (mid-July, 2002) releases of Windows 2000 and Windows XP. Also see the published description of an MSXML.dll exploit with respect to SQL Server 2000 that can allow the execution of arbitrary code by a remote attacker.)

Encryption to the rescue!

Obviously, that raises a problem. Let's say someone sends you a PowerPoint document saved as XML. Should you load it? Delete it? Read the XML file looking for external executable references?

More generally, how do you know:

  1. That a document you receive from a sender has not been changed by someone else? or,
  2. That the sender will neither deny having sent the document nor claim that you, or anyone else, could have modified it en route?

The technology needed to assure a document recipient that it originates with the ostensible sender and has not been tampered with uses the XML digital signature and encryption standards. These describe how encryption can be used to authenticate documents by defining what is enciphered, how that is done, and how the results are represented in an XML document.

If it quacks like a duck, walks like duck, and looks like a duck, should it have teeth?
The Trusted Computing Platform Alliance (TCPA) looks a lot like an open specification process at work generating the consensual basis for Microsoft's Palladium infrastructure. The TCPA folks, whose Web site is not fully accessible to users of Netscape 4.76 on Solaris and who don't allow just anybody to see past the "Organization" header on their front page, carry the digital signature idea forward into hardware and have produced an interesting, if somewhat frightening, 332-page main specification whose implementation would render Passport's cookies obsolete.

If you do a Google search using just "palladium" you don't find a lot of positive commentary. On the other hand, it could just be the hardware complement needed to enforce new terms that seem to be entering Windows end-user licensing. For example, recent Windows XP Service Pack 1 and Windows 2000 Service Pack 3 licenses state that: "You acknowledge and agree that Microsoft may automatically check the version of the OS Product and/or its components that you are utilizing and may provide upgrades or fixes to the OS Product that will be automatically down loaded to your computer."

One of those components is, I imagine, illustrated in the (made-up) XML excerpt below:

<?xml version="1.0"?>
<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet">
  <registration description="E2KXPSP2" progid="XP803AC54C" version="1.09"/>
  <DocumentProperties xmlns="urn:schemas-microsoft-com:office:office">
    <LocationOfComponents HRef="file:///H:\2KApps\MSOffice%20XP"/>
  </DocumentProperties>
</Workbook>

Using XML, with or without hardware-encoded keys, to enforce licensing might make sense in both the Microsoft and DMCA contexts (although active copy protection built into the distribution medium would be smarter), but the direct threat to Linux here is that a Windows user who needs to interoperate with a user of open-source software may become unable to do so because the XML registration tags written by the open-source application won't check out with Passport.

Specifically, the encipherment is handled via PKI (Public Key Infrastructure) on the RSA model. The underlying encryption methodology is clearly explained by Ed Simon, Paul Madsen, and Carlisle Adams in their article An Introduction to XML Digital Signatures.

The key point is that two separate keys are used such that, as Simon et al put it, "a cryptographic transformation encoded with one key can only be reversed with the other."

These keys are related via a hypothetical mathematical construct known as a one-way function. In these, the computational cost of creating two keys is trivial but the computational cost of finding the second key from knowledge of the first is thought to be very high. Thus a PKI user can publish one key while keeping the other secret, thereby creating a situation in which the ability to decrypt something with the public key asserts that it was encrypted with the private key and, by extension, can only be the work of the only holder of that private key. This therefore ensures that the sender cannot repudiate the encrypted data and so amounts to a digital signature.
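
The two-key property can be demonstrated with textbook RSA and toy-sized primes (real keys run to hundreds of digits; this only illustrates that what one key transforms, only the other reverses):

```python
p, q = 61, 53
n = p * q                    # 3233: the public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent: the "other" key

message = 65

# Signing: transform with the private key; anyone holding the public
# key can reverse it, proving the private-key holder did the transform.
sig = pow(message, d, n)
assert pow(sig, e, n) == message

# Privacy runs the same machinery the other way around: encrypt with
# the recipient's public key; only the private key reverses it.
cipher = pow(message, e, n)
assert pow(cipher, d, n) == message
```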

Similarly, a sender can use the recipient¡¯s public key to encrypt data knowing that only the recipient¡¯s private key can be used to decrypt it, thereby ensuring the privacy of the message. With PKI, senders and receivers can exchange public keys and so enter into a secure, signed, exchange. Neither side can know, however, who the other is unless some third party previously attests to both identities. As a result various certification authorities have evolved on the Web to certify that the identities involved are as represented and, at the cost of an additional pair of PKI encoded digital information transactions, both sender and receiver can be reasonably assured of the other¡¯s real identity.

The normal method for validating a digital document¡¯s internal integrity is to record a hash value (a mathematical or heuristic representation of all, or part, of the document in a single, usually short, string of text or a single number) and then to encrypt and transmit that hash value as a "digital digest." Recalculation of the hash value on document receipt and its comparison with the decrypted value is then expected to show whether the document has been tampered with because a content change will result in computation of a different hash, or digital digest, value.
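
In outline, the digest check works like this (sketched with SHA-256 standing in for whatever hash a given implementation actually uses):

```python
import hashlib

def digest(document: bytes) -> str:
    # The "digital digest": a short fixed-size fingerprint of the content.
    return hashlib.sha256(document).hexdigest()

sent = b"Quarterly report: profits up 3%."
transmitted_digest = digest(sent)   # encrypted and sent with the document

# An en-route change, however small, produces a different digest.
received = b"Quarterly report: profits up 30%."
assert digest(received) != transmitted_digest   # tampering detected
assert digest(sent) == transmitted_digest       # an intact copy verifies
```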

The combination of XML with embedded digital signatures allows information suppliers to assure their customers that the documents they get are authentic and unmodified by third parties. In other words, if that hypothetical PowerPoint document contained a digital signature, and you had the software to verify it, and both the hash and key checks passed, then you would be certain that the document came from the ostensible sender and had not been modified en route. Equally importantly, if it turned out to contain a virus, the sender would be forced to acknowledge responsibility, since you could prove who it came from and that the problem existed when the sender affixed the digital signature, and thus before the document was sent to you.

These ideas can, of course, be applied in other ways to other problems. It's not a long step, for example, from using digital signature standards to authenticate documents for both sender and content to applying the same ideas to authenticate almost any kind of information exchange, including that needed for a single sign-on system, for remote procedure call authentication, or in XML documents that distribute a user's credit balance or other personal details to third parties.

Liberty Alliance

On the open side of the ledger, these ideas pour directly into the XNS (eXtensible Name Service) attempt to specify a vendor-neutral digital identity infrastructure. On the proprietary side, however, they underlie what became Microsoft¡¯s Passport Service.

One of the responses to Passport is known as the Liberty Alliance, a Sun-inspired effort to produce a genuinely open and interoperable single sign-on and authentication standard.

The Liberty Alliance released its 1.0 Federated Network Identification and Authorization specification on July 11th 2002. In many ways, this is both a simplification and a generalization of the ideas behind Passport but without the proprietary overtones and single point of control characterizing the Microsoft solution.

One of the most interesting things about this specification is its use of SAML (Security Assertion Markup Language) to define and control the messaging structures used in an actual implementation of the specification. Full details, including protocols and the SAML schemas needed, are available from the alliance; basically, the Liberty specification handles authorization in a three-stage process with all communications structured via SAML and flowing through the user's browser or other software agent.

Microsoft, which had previously announced planned upgrades in its Kerberos-derived security for Passport, also announced, on July 16, 2002, its intention to embrace SAML.

As currently defined, however, SAML is faithful to the distinctions that went into the SGML specification back in 1983 and therefore does not include procedural elements. Thus, SAML is used in the Liberty specification as a technology-agnostic way of conveying assurance information between two or more procedural applications which, respectively, produce and consume the information.

If Microsoft were serious about its support for SAML as a standard it could, of course, adopt the Liberty Alliance single sign-on and authentication specification for its own use and let Passport, and such extensions of XML as its use to bypass firewalls for "Web programming" and remote services execution, die of their own weight.

Enter from stage left, Plan 9!

That may not strike you as likely any time soon, but stranger things have happened, including the growth of a significant following for Plan 9, the movie, and its eponymous influence on the Liberty Alliance specification.

When Ed Wood Jr. put his ideas about filmmaking into Plan 9 from Outer Space, the result became a cult classic defining an entire genre of B movies. When Rob Pike and his colleagues at AT&T Bell Labs first defined the Plan 9 operating system, many of their ideas seemed to be from outer space. The linkage to the Dantesque horrors of Plan 9 has proved such an inexhaustible source of bad puns and in-jokes that I half expect to see Sun release Solaris 3.0 on a Good Friday.

In operation Plan 9 looks a lot like Unix but it is quite different internally in that the original design took many Unix ideas for single machine environments and re-thought them for fully distributed, multiple-machine environments. Key among these is the link between user and machine. In Unix, a user authorization is defined fundamentally with respect to the resources available on a specific machine. In Plan 9, user authorizations are defined for a distributed virtual machine consisting of many physical machines.

Thus, Plan 9 user services present as hierarchical file systems and the machines a user accesses exchange individuality for function. A user may, that is, access a service such as program execution on a CPU server without needing to know anything about that machine in terms of where it is, who owns it, what kind of CPU(s) it has, or what other resources may be available to it locally.

Within the current releases of Plan 9 the core user authentication functions are handled by an agent called factotum that handles all security interactions on the user's behalf: instruct factotum on the clearances you have and any service requiring authentication can query it, instead of you, to determine what to do about the request.

Technically, this has the enormous benefit of taking the entire cryptographic exchange burden out of the hands of both users and application designers. Managerially, it completely avoids most of the complexities associated with supporting large numbers of users in many-owner, single sign-on environments.
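
A hypothetical sketch of the idea, with all names invented (Plan 9's real factotum speaks established challenge/response protocols through a file-system interface): the user loads keys into the agent once, and services thereafter query the agent rather than the user.

```python
import hashlib
import secrets

class Factotum:
    """Holds the user's keys and answers authentication queries."""
    def __init__(self):
        self._keys = {}                  # service name -> shared secret

    def add_key(self, service, secret):
        # "Instruct factotum on the clearances you have."
        self._keys[service] = secret

    def respond(self, service, challenge):
        # Answer a service's challenge on the user's behalf.
        secret = self._keys[service]
        return hashlib.sha256((challenge + secret).encode()).hexdigest()

class Service:
    """Any resource requiring authentication, e.g. a CPU server."""
    def __init__(self, name, secret):
        self.name, self.secret = name, secret

    def authenticate(self, agent):
        # Query the agent instead of the user.
        challenge = secrets.token_hex(8)
        answer = agent.respond(self.name, challenge)
        expected = hashlib.sha256(
            (challenge + self.secret).encode()).hexdigest()
        return answer == expected

agent = Factotum()
agent.add_key("cpu-server", "user-passphrase")
assert Service("cpu-server", "user-passphrase").authenticate(agent)
```

Neither the user nor the application author handles the cryptographic exchange directly; both sides just talk to the agent.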

Showing off, a Freudian slip?
The first implementations relied on Gnots for display purposes. The Gnot was a true smart display running the 8½ window manager. It was a custom-built, MC68020-based terminal with a big screen and powerful bit-mapped graphics, intended to rigorously separate display from file or CPU functions.

In its simplest form, you authenticate at the local level, instruct factotum on dealing with authentication queries, and it works with the network operating system and factotum aware applications to automatically recognize that authentication anywhere the system operates.

Given that the factotum implementation is rigorously based on a mathematical representation of the authentication problem, can use multiple encryption methods independently, and is operationally quite simple, it is likely to be extremely difficult to subvert.

Factotum is considerably simpler in concept and more robust in implementation than the protocol and strategy produced by the Liberty Alliance. The two do, however, bear a vague relationship. It is perhaps what you'd get if some members of the committee putting together the Liberty protocol looked at Passport to see what errors to avoid while others remembered reading about Plan 9's authentication solution. Read the Liberty documentation carefully, recognize that the alliance specification has to work over a much larger and more complex set of ownership, control, and technical interactions, and the two sets of ideas start to look like cousins.

Commercial realities?
A relative, the Andrew File System from Carnegie Mellon, is in widespread use.

Another closely related technology, the pluggable authentication module, has long been available on Linux.
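The family resemblance shows up in PAM's configuration: the application hands authentication off to a stack of interchangeable modules rather than implementing it itself. A minimal, hypothetical /etc/pam.d fragment (module names are the standard Linux ones) might look like:

```
# Illustrative /etc/pam.d/login fragment. PAM consults these
# modules in order, so the authentication method can change
# without touching the application.
auth     required   pam_unix.so
auth     optional   pam_krb5.so use_first_pass
account  required   pam_unix.so
```

As with factotum, the application never handles the cryptographic exchange directly; it only learns whether authentication succeeded.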

The Kerberos reference page provides links to information about the original MIT/CMU product.

One of the links between them is in an alliance specification concept called "circles of trust" that would map rather well to global or enterprise-wide Plan 9 implementations if those existed in commercial reality. In such an environment, the network really would be the computer, and that's a key reason I expect Solaris 3.0 to incorporate this functionality as it absorbs more and more of the Plan 9 idea set to deliver truly distributed access to computing resources. If it happens, that will provide a powerful commercial reason for people to adopt these ideas and thus create additional impetus behind their adoption in the general open source world.

If so, we¡¯ll have a very clear cut battle for market dominance between Microsoft and open source ideas on single sign-on:

  • The X-files pile complexity on improbable foundations to derive Passport from SGML, while
  • Plan 9 represents the evolution of Unix through simplification and the re-thinking of very basic design ideas.

In the old days of Windows dominance, the outcome would have been a no-brainer. When Microsoft pointed its checkbook, people surrendered. However, that world is rapidly going away. Now ordinary users mutter about security, governments look at Windows-brand products as national security risks, and lawyers gear up to feast at the tort table as the courts start to enforce our liability for losses to clients whose information we abuse.

With factotum in play, legally "accepted industry practices" may soon no longer include Passport, and trying to make a quick change back to Firefly's original ideas or the MSN wallet solutions will probably just make all of this worse for Microsoft. Why? Because "accepted industry practices" in an unregulated industry are set by a kind of majority vote. The dominance of the Apache toolset means that Microsoft won't have the votes to enforce its approach over technologists' objections or in spite of customers' legal risks.

Open source, on the other hand, continues to gain ground as part of the solution. A rapid public win by factotum over Passport may be enough to flip attitudes in the mainstream press from near automatic approval of Microsoft press releases to due cynicism.

More Stories By Apache News Desk

Apache News Desk trawls the world's news information sources and brings you timely updates on the Apache Software Foundation community of open-source software projects, Ant, Beehive, Cocoon, Harmony, Jakarta, Maven, and Tomcat.

