
Making Open Source Software

The world continues to embrace and adopt free and open source licensed software across the board

Software is surprisingly dynamic. All software evolves. Bugs are found and fixed. Enhancements are added. New requirements are discovered in using the software. New uses are found for it, and it is shaped to those new uses. Software that is useful and used must, by its very existence, evolve. Well-organized open source software communities create the right conditions to make this dynamism successful.

The world continues to embrace and adopt free and open source licensed software across the board.  Vendors and OEMs, their IT customers, governments and academics are all using, buying and making open source software, and often all three at once.

Using and buying liberally licensed open source software, i.e., consuming such software, are relatively straightforward affairs. You buy a product based on open source licensed software much as you buy any other software, evaluating the company producing the products and services against your own IT requirements and managed procurement risk profiles. You don't procure Red Hat Linux server software any differently than you historically bought Solaris or might buy Microsoft Windows Server systems.

Using open source software (as opposed to buying a product) adds considerations based on evaluating the strength of the community around the open source project and the costs of supporting that choice, either through the development of in-house expertise (likely supported by joining the project's community) or the hiring of external expertise. You look at a project's how-to documentation and tutorials, forum and email list activity, and IRC channels. You consider the availability of contracting support from other knowledgeable sources around the community. These considerations really don't change whether the open source software to be used is tools and infrastructure or developer libraries and frameworks. They scale with use, from individuals weighing the time they have to spend solving their problem, all the way up to company IT departments weighing the time and money trade-offs they're willing to make.

Once one starts to make open source software, i.e., producing it, a different set of considerations arises. There are really two scenarios for producing open source:

  • One can contribute to an existing project, adding value through bug fixes and new functionality (and possibly non-software contributions like documentation and translations).
  • One can start a new open source project, which means organizing the infrastructure, developing the initial software, and providing for the early community.

The motivation in the first case of contributing to an existing open source project is simple.  People generally start using open source software before they become contributors.  People use software because it solves a problem they have.   Once they use the software for a while, they will generally encounter a bug, find a change they want to make, or possibly document a new use case.  If the user is comfortable with making software changes and the project community has done a good job of making it easy to contribute, then contributions can happen.

While it would be easy to simply make the necessary change and ignore contributing it back, living on a personal forked copy of the software comes at a cost. Others' enhancements and bug fixes are no longer picked up simply by installing newer versions, and one needs to re-patch the software with one's own changes and fixes on every move to a newer version. It is far better to contribute one's changes back to the project community if feasible, working with the committers to ensure the contribution is made correctly and patched into the main development tree. The onus is on the community to make it easy to contribute, but it's on the contributor to contribute correctly. The cost of living on a fork gets worse over time as the forked branch drifts further from the mainline development of the project. It is well worth the investment to contribute.
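
To make that cost concrete, here is a minimal sketch, in Python driving git, of the recurring chore a fork imposes: with every new upstream release, one's private patches have to be re-applied. The remote name "upstream" and the branch "my-patches" are hypothetical, and a real fork will eventually hit rebase conflicts that no script can resolve for you.

    import subprocess

    def run(*args):
        """Run a git command, raising if it fails."""
        subprocess.run(["git", *args], check=True)

    # Hypothetical setup: "upstream" is the original project's repository,
    # "my-patches" is the local branch carrying our private fixes.
    run("fetch", "upstream")

    # Re-apply our patches on top of the new upstream release. The further
    # the fork has drifted, the more often this stops on conflicts that
    # must be resolved by hand -- the recurring cost described above.
    run("checkout", "my-patches")
    run("rebase", "upstream/main")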

This brings us to the "making" open source software case of starting one's own project.

First, it all starts with software. You must consider the software itself around which a project and its community is to be built. The software must "do" something useful from the beginning. Open source software developer communities are predominantly a discussion that starts with code, and without the code there is no discussion. Even when a fledgling community comes together to discuss a problem first with an eye to building the solution together, sooner or later someone needs to commit to writing the first working software that will act as a centre of gravity for all other conversations.

If an existing body of software is to be published into an open source community, then certain considerations arise with respect to ownership and licensing. Software is covered by copyright, and someone owns that copyright. Publishing existing software requires the owners to agree to its publication and licensing as open source. The weight of existing code and its cultural history need to be considered, and may affect the early project community.

The crucial question becomes "why" open source? What motivates the publication of software under an open source license? Why share the software? Why choose NOT to commercialize it? (There are a number of important reasons not to commercialize the software or keep it proprietary.)

The economics of collaboratively developing software are compelling. Writing good software is hard work. Managing the evolution of software over time is equally hard work. Sharing good software and collaboratively developing and maintaining it distributes those costs across a group. Publishing the software as open source and building a development community (however small) is motivated by a desire to evolve the software and share the value, and by an openness to the idea that others in the community will join in, sharing their domain expertise, learning the software's structure, and sharing the costs of evolution.

The economics are also asymmetric. For a contributor, the contribution may represent a small bit of personal expertise (e.g., a single bug fix or a particular application of an algorithm they understood), but the contributor is rewarded with the community's investment in the entire package of software at relatively small personal cost. Likewise, the contribution is valuable to the software's developer (and user) community at large without the community carrying the contributor's costs as a full-time member of the developer community. (Indeed, a single contribution may be the only value the contributor had to give in this instance.)
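
A toy model makes the asymmetry vivid. All of the numbers below are invented purely for illustration; the point is only the ratio between what a contributor pays and what they receive.

    # Toy model of the asymmetric economics of contribution.
    # Every figure here is invented for illustration.

    package_value = 1_000_000  # rough cost to build the whole package alone
    contributions = [
        ("single bug fix",            500),  # (contribution, personal cost)
        ("algorithm improvement",  20_000),
        ("documentation patch",     2_000),
    ]

    for what, cost in contributions:
        # Each contributor "buys" the whole package for the price of
        # their one contribution.
        ratio = package_value / cost
        print(f"{what}: paid {cost}, received software worth "
              f"{package_value} ({ratio:.0f}x return)")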

Motivation to develop an open source community to evolve the software is an essential factor, but so too are knowledge of the problem domain and the internal knowledge of the software needed to anchor the community. The essential motivation to share the software as open source supports the commitment and investment to maintain enough domain expertise and software knowledge to keep the community going and growing. Without all three factors it is difficult for the community to evolve the software and thrive.

One of the first structural considerations is which open source software license to attach to the project. There is an array of licenses approved by the Open Source Initiative as conforming to the Open Source Definition, but only a few typically need consideration, and we'll discuss those at length in another post. The important thing to realize when choosing a license is that it doesn't just outline the legal responsibilities for how the software is shared; it also outlines the social contract for how the community will share.

The next structural consideration for a community is to choose a tool platform to support collaborative development. This is the hub of activity for managing source code versions, distributing built software, handling the lines of communication, and logging issues and bugs. There are a number of free forge sites (e.g., CodePlex, Google Code, GitHub, SourceForge), and the tools all exist as open source themselves if a project wants to develop and manage its own site.

The last structural consideration involves deciding what sort of community one wants to develop. What sort of governance will be required, and when will certain things need to be instituted? There are two very good books available in this space.

Contribution is the lifeblood of an open source software community. It leads to new developers joining the project and learning enough to become committers with responsibility for the code base and its builds. It's what makes the shared economics work for all. But as already stated, contributors generally start as users of the software. This means that a project community hoping to attract contributors first needs to attract users. The project's initial participants need to build a solid onramp for users who can then become contributors, by making the software easy to "use": discoverable, downloadable, easily installable, and quickly configurable.

Not all users will contribute. Some may never push the software enough to need a change; it simply solves the problems they need to solve. Of those that contribute, some will contribute in very simple ways, reporting bugs for particular use cases. Others may contribute more, and this is where the second onramp needs to be developed by the community. Contributors need to know what sorts of contributions are encouraged, how to contribute, and where to contribute. If code contributions are to be encouraged, having scripts and notes on building the software and testing the baseline build makes it easy for potential contributing developers to get involved.
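
As a sketch of what that second onramp can look like, a project might ship a single bootstrap script that installs dependencies and runs the baseline test suite in one step. The script below is hypothetical: it assumes a Python project with a conventional requirements.txt and a pytest suite, which may not match any given project's layout.

    #!/usr/bin/env python3
    """bootstrap.py -- hypothetical one-step onramp for new contributors:
    install the project's dependencies, then run the baseline tests."""

    import subprocess
    import sys

    def main() -> int:
        # Install dependencies into the current environment (assumes a
        # conventional requirements.txt at the repository root).
        subprocess.run(
            [sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
            check=True,
        )
        # Run the baseline tests so a newcomer knows the starting point
        # is green before changing anything.
        return subprocess.run([sys.executable, "-m", "pytest"]).returncode

    if __name__ == "__main__":
        sys.exit(main())

A newcomer who can go from clone to green tests in one command is far more likely to attempt a first patch.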

So building an open source software project follows a pattern:

  • There needs to be useful software, at least a seed around which to build a community.
  • Motivation to share, expertise in the problem to be solved, and an understanding of the software structure will anchor an open source community. The project founder is the starting point for what will hopefully become a community.
  • The project needs to have the structural issues of license, forge, and governance sorted, even if governance becomes an evolving discussion in a growing community.
  • The community needs to build a solid onramp for users, and a second onramp for contributors.  The sooner this happens in a project's life, the faster it can build a community.

One can choose to publish software under an open source license and never build a community. The software isn't "lost", but neither is it hardened or evolved. It may be useful to someone who discovers it, but the dynamic aspects of software development are lost to it. Taking the steps to encourage and build a community around the open source project sets the dynamic software engine in motion and allows the economics of collaborative development and sharing to work at their best.

More Stories By Stephen Walli

Stephen Walli has worked in the IT industry since 1980 as both customer and vendor. He is presently the technical director for the Outercurve Foundation.

Prior to this, he consulted on software business development and open source strategy, often working with partners like Initmarketing and InteropSystems. He organized the agenda, speakers and sponsors for the inaugural Beijing Open Source Software Forum as part of the 2007 Software Innovation Summit in Beijing. The development of the Chinese software market is an area of deep interest for him. He is a board director at eBox, and an advisor at Bitrock, Continuent, Ohloh (acquired by SourceForge in 2009), and TargetSource (each of which represents unique opportunities in the FOSS world). He was also the open-source-strategist-in-residence for Open Tuesday in Finland.

Stephen was Vice-president, Open Source Development Strategy at Optaros, Inc. through its initial 19 months. Prior to that he was a business development manager in the Windows Platform team at Microsoft working on community development, standards, and intellectual property concerns.
