
Linus changed his source habits & why it doesn't matter

Developers have three options for advancing the Linux kernel.

(LinuxWorld) -- Linux is in greater demand every day. People want to run Linux on everything from wristwatches to mainframes. Thankfully, the core kernel developers aren't burdened with the responsibility of making Linux run on a wristwatch, but they do have to deal with desktop and server demands. People want the kernel to accommodate more processors, different hardware architectures, more types of I/O, more robust and complex network support and filtering, not to mention emerging hardware peripherals ranging from InfiniBand special-purpose high-speed network adapters to cheap digital cameras.

Talented programmers addressed these issues. They submitted patches, which they felt were dropped into a black hole. They felt ignored, and, in some cases, they were right. Some ISVs feel as if the core kernel developers are an exclusive club that resents and rejects contributions from outside the inner circle, especially from organizations where capitalism is involved. I don't know if that's true (I doubt it), but the perception can be destructive.

Many people have offered various suggestions on how to improve the process of accepting and integrating kernel patches. I don't have space to tackle them all, and I doubt if my endorsement would make any difference. I do have a bit of advice for those who have more influence on the process. (If you're interested in reading up on all the ideas being tossed around, visit one of the many Linux kernel mailing list archives. You can find links for two of them at http://www.kernel.org/.)

Free advice for free software

Much of the discussion is politically motivated, and some of the antipathy is based on personality conflicts. Some of the suggestions are purely logistical in nature, for example, that kernel developers should use a more intelligent means of submitting and applying patches than the patch and diff utilities.
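For readers unfamiliar with the workflow under discussion, here is a minimal sketch of the plain diff/patch cycle kernel contributors relied on. The directory names and file contents are made-up stand-ins for illustration, not real kernel code or Linus's actual scripts:

```shell
# Sketch of the classic diff/patch workflow (hypothetical trees and files).
set -e
work=$(mktemp -d); cd "$work"
mkdir -p orig/drivers mine/drivers
echo 'static int debug = 0;' > orig/drivers/foo.c
echo 'static int debug = 1;' > mine/drivers/foo.c
# The contributor diffs a pristine tree against a modified one...
diff -urN orig mine > my.patch || true   # diff exits 1 when trees differ
# ...and the maintainer applies the patch to his own copy with -p1,
# stripping the leading "orig/" or "mine/" path component.
cp -r orig applied
( cd applied && patch -p1 < ../my.patch )
cat applied/drivers/foo.c   # -> static int debug = 1;
```

The weakness the critics point to is visible even here: the patch carries no history, no authorship metadata, and no merge intelligence; everything beyond applying hunks is left to the maintainer's own discipline and scripts.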

It appears Linus Torvalds is doing just that. Earlier this week he announced that he had started using BitKeeper, a distributed source management system. He wrote in his weekly kernel update, "...I've spent about a week trying to change my working habits and scripting bitkeeper enough to (a) import a good revision controlled tree into it from the 2.4.x and 2.5.x patch-archives and (b) try to actually accept patches directly into bitkeeper."

This is good news. BitKeeper isn't an instant panacea, however. "Quite frankly, so far it has definitely made me slower -- it took me basically a week to get about 50 patches applied, but most of that time by far was writing scripts and just getting used to the thing," Torvalds writes. "I expect to be a bit slower to react to patches for a while yet, until the scripts are better."

As great as BitKeeper might be and as wonderful as Torvalds has proven to be, we need to keep these two facts in mind:

  • Linus Torvalds is not God
  • The Linux kernel is open source, and it is licensed under the GPL

It is difficult for me to criticize the way Linus Torvalds folded patches into the Linux kernel in the past. Anyone who is tempted to do so should remember how much we Linux users owe to the talent and years of hard work of Torvalds.

Nevertheless, for the sake of argument, allow me to assume the worst. Suppose Linus deliberately ignores patches from some people, and in so doing occasionally works against the best interest of all Linux developers and users. Suppose Linus rejects some patches, not because the patches are without merit but because they come from IBM or HP employees, and he doesn't want those companies to have any influence on the kernel.

Or, how about this? What if BitKeeper is the worst tool for collecting patches, evaluating them, and applying them? What if BitKeeper is perfectly fine but Torvalds uses it incorrectly? What if he never "gets used to the thing"?

I seriously doubt the situation is nearly as bad as these worst-case scenarios, but it is likely that there is an element of truth to at least some of these claims. Why? Because Linus Torvalds is not God. He has a finite ego, IQ, energy level, and attention span, and he suffers to some degree from all of the other human limitations you and I share. He is predisposed to handling things one way and not another. It is possible that his way was the best way when the kernel was smaller and more manageable, but it is no longer the best way. In other words, it may be true that regardless of the source management tools he uses, the Linux kernel has outgrown Linus Torvalds.

If so, there is still one more thing you can do. Remember that the Linux kernel is open source.

If you can set up a system that manages the progress of the Linux kernel better than Linus can, then go for it. Linus Torvalds may be able to stop you from calling your kernel "Linux," but he can't stop you from taking the kernel as it exists today and doing a better job advancing it. If you're worried that kernel developers simply won't cooperate with you out of a sense of loyalty to Linus, then keep reminding them that Linus Torvalds is not God. Eventually it will sink in.

Okay, now. Any takers?

If not, then I have one last bit of advice. Keep on making suggestions on how to improve the process. In the meantime, try to work with Linus the way he prefers to do things, whether you think he's right or not.


In case you're wondering, here's how I feel about the current branches of development on the Linux kernel. I haven't been very happy with the Linus-managed 2.5 branch as of late. It usually doesn't even compile on my system. Dave Jones is aggressive about cleaning up the problems with the 2.5 branch and merging fixes and features from other branches. Jones' patches usually compile for me (although they make it difficult, but not impossible, for me to use the NVidia accelerated driver). You have to be pretty daring to mess with any of the 2.5 branches, whether they're from Linus or Dave. I run the unstable branch of Debian, so I tend to take risks in order to learn more about Linux, but I'm not comfortable running any of the 2.5 kernels yet except as a short experiment.

I miss the frequency with which Alan Cox improved and experimented with the 2.4 branch, so I was very happy to see Alan post some patches to the 2.4.18-pre tree. I'm currently running 2.4.18-pre7-ac2, which has been working great.
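The version strings above reflect how patches of that era stacked: a -pre patch applied on top of the last stable tree, and an -ac patch applied on top of the -pre tree. The sketch below simulates that ordering with tiny stand-in files rather than real kernel sources; the patch names and contents are invented for illustration:

```shell
# Simulated stacking of a -pre patch and an -ac patch (stand-in files only).
set -e
work=$(mktemp -d); cd "$work"
mkdir base pre ac
echo 'EXTRAVERSION =' > base/Makefile
echo 'EXTRAVERSION = -pre7' > pre/Makefile
echo 'EXTRAVERSION = -pre7-ac2' > ac/Makefile
diff -urN base pre > patch-pre || true   # stand-in for the -pre patch
diff -urN pre ac  > patch-ac  || true    # stand-in for the -ac patch
cp -r base tree
# Order matters: the -ac patch assumes the -pre patch is already applied.
( cd tree && patch -p1 < ../patch-pre && patch -p1 < ../patch-ac )
cat tree/Makefile   # -> EXTRAVERSION = -pre7-ac2
```

Applying them in the wrong order would make the second patch's context fail to match, which is exactly why trackers of the -ac trees had to keep careful note of which base each patch expected.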

Whenever Linus reaches a reasonably stable point in his kernel branch, I usually find myself going back to one of his versions. Whether I do so in the future is uncertain, which is as it should be. It all depends on who does it best.

More Stories By Nicholas Petreley

Nicholas Petreley is a computer consultant and author in Asheville, NC.
