By Dee-Ann LeBlanc
May 12, 2004 12:00 AM EDT
Most people who know anything about Linux know that the kernel – the core of the operating system, Linux itself really – is developed by Linus Torvalds and a large number of volunteers.
In a nutshell, Linus is the top dog, and the one responsible for guiding the overall process. Beneath him are people responsible for various kernel sections and even versions. One person might be in charge of maintaining a kernel through its production life cycle, such as Andrew Morton preparing to take care of the 2.6 kernel series. Others are in charge of various platforms (64-bit Sparc, Mac 68K, SGI, etc.). Yet more are in charge of subsystems, such as the layer that handles SCSI hardware operation. It's a sensible top-down approach that has grown from the need to manage a code base of ever-increasing complexity, in which both work and responsibility are divided among respected members of the community.
And yet, ultimately, anyone can get involved in the Linux kernel development process. You could, for example, assign someone at your company to act as a beta tester for the Linux kernel and the collection of Linux projects and products you use in your business. If having thousands of beta testers all over the world is part of what produces the top-notch software we enjoy in the Linux community, then making sure your own people report the problems they encounter, before a new kernel or tool version goes into your production environment, increases the return on your Linux investment.
All that those who want to contribute have to do is a bit of homework. A quick visit to the Linux Kernel Mailing List (LKML) FAQ at www.tux.org/lkml helps you understand the main kernel discussion list in all of its glory, and www.tux.org/lkml/reporting-bugs.html teaches you how to effectively report bugs to the kernel maintainers. Even just testing the experimental kernel tree can be a great help, and you'll learn a ton along the way.
These are open source values. Everyone can contribute, even if they're not a programming guru. But there's a finer point to this as well. To really be helpful to many open source projects, you have to take the time to learn at least the rudiments of their "system." Some have online forums, some have mailing lists, some are just a small Web presence with a single e-mail address where you can write to the developer. It all depends on the size of the project and the audience.
The Linux kernel serves as an extreme example. Its mailing list alone is so busy that there are sites such as Kernel Traffic (http://kt.zork.net/kernel-traffic) whose sole purpose is to summarize the information in a useful manner. On top of that, there are millions of users. Even if one-half of one percent of all Linux users sent bug reports to the list or directly to the various maintainers each day, that would be thousands of reports. Hence, a system. This also explains why blundering on without learning the system tends to get people grouchy responses.
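To put rough numbers on that claim (the user count here is only an assumption for the sake of arithmetic): with, say, two million Linux users, one-half of one percent works out to 0.005 x 2,000,000 = 10,000 reports a day, every day.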
Shared values are the glue that holds the open source community together. This is the single biggest thing that many journalists and skeptics still haven't grasped. It's not money, fame, or power. I'm not even entirely convinced it's all about the itch scratching we seem so fond of talking about in open source land, as if everyone has fleas.
What are some more of the values that hold us together? Let me use an example to shed some light on the subject.
An Example: The Birth of ext3
Consider this once-contentious issue: adding a default journaling filesystem to Linux. Way back at the turn of the century (early 1999), Linus Torvalds and the gang were working on the 2.3 kernel series, on their way to kernel 2.4. Kernel list participant Alan Curry had been experiencing performance problems on a Linux server handling high traffic. He was able to trace this to a problem with two components: syslogd and fsync().
syslogd is the program that handles recording errors, accesses (such as a piece of mail being sent, or someone requesting a Web page), and more for the various services on many Linux systems. As you might imagine, on an ISP's e-mail server syslogd can grow quite busy. A feature called log rotation prevents individual log files from getting too huge by breaking them into pieces, and creating a new file each time the current file reaches a certain size. Since the files will add up infinitely if left alone, this feature also keeps only a set number of pieces around before either compressing them and farming them off for backup and deletion, or just outright deleting them. The system administrator can either set how often to do this, or put limits on how large to let the individual files grow.
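To make the mechanics concrete, here is a minimal sketch in C of size-based rotation. It is an illustration only, not syslogd's or logrotate's actual code, and the path and size limit are made up for the example.

#include <stdio.h>
#include <sys/stat.h>

#define MAX_LOG_SIZE (1024 * 1024)      /* rotate once the file passes 1 MB */

/* If the log file has grown past the limit, rename it to "<path>.1" so the
 * next write re-creates a fresh, small file. Real rotation tools also keep
 * several numbered pieces, compress the old ones, and delete the oldest. */
static void rotate_if_needed(const char *path)
{
    struct stat st;
    char rotated[1024];

    if (stat(path, &st) == 0 && st.st_size > MAX_LOG_SIZE) {
        snprintf(rotated, sizeof(rotated), "%s.1", path);
        rename(path, rotated);
    }
}

int main(void)
{
    rotate_if_needed("/var/log/maillog");   /* hypothetical log file */
    return 0;
}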
Curry was able to determine that his problem hit whenever a particular log file grew huge, to approximately 36MB. At this stage, the syslogd program would consistently hang – it would stall and stop working – until the log file was rotated and small once again. Tracing this issue further, he discovered that this was the fault of fsync(), the C function that ensures that data buffered in memory gets properly written out to the file on disk.
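As a rough sketch (assumed code, not syslogd's actual source), this is the pattern at issue: write a log entry, then call fsync() so the entry is on disk before continuing. Dropping the fsync() call is exactly the workaround that comes up below.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *entry = "May 12 12:00:00 mailhost sendmail: message sent\n";
    int fd = open("/tmp/example.log", O_WRONLY | O_CREAT | O_APPEND, 0644);

    if (fd < 0)
        return 1;

    if (write(fd, entry, strlen(entry)) < 0) {
        close(fd);
        return 1;
    }

    /* Force the kernel to flush the buffered data to disk before going on.
     * This is the call that stalled as the log file grew large; skipping it
     * is faster, but recent entries can be lost if the system crashes. */
    if (fsync(fd) < 0) {
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}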
The first suggestions from the kernel mailing list were all workarounds, things that various people would try on their own servers just to keep things moving along. One was to simply rotate the log files more often. That works, of course, but it's not really a solution. Others suggested another approach: disabling syslogd's use of fsync(). Of course, if you do that you may find after a system crash that there's vital data missing from your log files, so that's no good. Right?
Was fsync() needed? A patch was submitted, but the technical solution offered wasn't strong enough for transaction-oriented databases. Debate raged again, with Linus trying to push people toward simpler and simpler solutions rather than letting things get more complex, and therefore more likely to have problems. Extensions to the ext2 filesystem were proposed and Linus Torvalds said no, no, no, and again no.
While Torvalds is revered by many in the Linux community, he receives little special treatment on the kernel development list. Everyone involved in kernel development wants to do the best job possible, which means that discussions – or arguments, which is what this degenerated to for a bit – tend to happen with everyone as peers for the most part. Torvalds might have the last word, but that doesn't mean that people always let a topic drop if they think that there really is something to it.
Apparently, Stephen Tweedie had already started working on such extensions to ext2 in an attempt to quickly answer the need for a journaling filesystem in Linux – something that would definitely address the fsync() problem. This displeased Torvalds to no end, since he didn't want ext2 known as the ever-changing filesystem, and pointing out that Tweedie was calling it ext3 did only a little to dull Torvalds' annoyance. Finally, in an exchange that would do an armchair psychologist proud, Alan Cox and Tweedie managed to help steer things to calmer waters.
Once there, the debate continued on just how far this journaling filesystem should go. These discussions – tense or otherwise – are one of the natural ways that innovation is constantly fostered in the Linux community. Once a next-generation default filesystem was accepted as a given, all of those little "wish lists" that lurk in the back of the mind started leaking out from all directions. Torvalds himself started this by outlining some of the immediate issues he would love to see dealt with, such as removing "." and ".." from the directory trees. In true open source developer fashion, that comment began a discussion about whether there were enough benefits or too many dangers in doing so.
Somehow, in all of this, the whole issue fell off the radar and folks must have left Tweedie to do his work in peace. Now he was aware of their concerns and wishes, and they simply must have trusted him to offer something to test and pound on when the time came. After all, it's one thing to talk about creating a journaled filesystem for Linux. It's another thing to do it.
ext3: A Work in Progress
A mere two weeks later, Thomas Pornin asked an innocent question about whether BSD-style soft updates were in the works for Linux. This brought up the issue of Tweedie's work on ext3 and an already-existing solution called dtfs (now LinLogFS). A new filesystem permissions model somehow wormed its way into the discussion, sidetracking everything, and then in mid-1999 SGI announced that it was turning a version of its own IRIX filesystem – XFS, a journaling filesystem – into an open source filesystem for Linux.
Was Tweedie's work in vain? (Some would say that such projects are never in vain, since they often reveal issues that people might not otherwise have considered.) This would seem a great time for a cliffhanger, but everyone knows the answer. It was agreed that if XFS were placed under the GPL, Tweedie might drop ext3. An SGI employee pointed out that XFS had to be partially rewritten to replace code that belonged to other people – and to remove patent issues – so XFS wouldn't be ready to be released as open source any time soon. The folks at SGI didn't even know exactly which license they would choose yet. This put Tweedie's work back into the running, since no one was going to adopt a new default filesystem that hadn't actually been written.
Once that furor died down, the fledgling ReiserFS became a serious contender. Timing issues prevented it from being included in the 2.3 kernel stream, and around a month later the issue of ext3 came up once again. By then, ext3 had attained the lofty status of release 0.0.1 with 0.0.2 on the way. Already, at this point, the only difference from a user's point of view between ext2 and ext3 was the journal file. Whether it would remain this way, Tweedie was still not sure.
Where are the values here? Well, for one thing, everyone was working on their own projects. No one committed ahead of time to which would be the "winner." It might seem a bit backward to those from the commercial world of planning everything out and driving all resources into a single project, but in this type of environment it's acknowledged that there are many valid means to the same end. Filesystem theory is a complex subject. Today, we have journaling filesystems with various strengths and weaknesses to pick and choose from. Some handle tiny files best, some handle huge files best, and some occupy a middle ground that works well in many circumstances.
When asked in January 2000 if LVM and filesystem journaling would be folded into kernel 2.4, whether with ext3 or ReiserFS, the general consensus on the kernel list was no. There were too many issues that needed to be ironed out before Torvalds and others felt ext3 was solid enough for production use. ReiserFS, however, was closer to reaching this point. Neither journaling filesystem ultimately made it into the initial 2.4 release – an interesting fact considering that ext3 is in such heavy use today.
ReiserFS did, however, make it into kernel 2.4.1 in 2001, mostly due to the fact that "of the journaling filesystems it's the only one I know of that is in major real production use already, and has been for some time," according to Torvalds.
XFS was also in heavy testing then, and so was ext3. However, Torvalds has a policy against just integrating anything and everything into the kernel. If a small group of people fully capable of patching the kernel themselves – or building the modules on their own – are the only people interested in a particular area (such as XFS in this case) then he chooses to wait until there is more demand. As far as ext3's demand went, Torvalds said, "I would expect ext3 to be the next filesystem to be integrated, but I would also expect that Red Hat will actually integrate it into their kernel first, and expect me to integrate it into the standard kernel only afterwards."
This little quirk of various distributions using slightly different kernels is another thing that confuses both new users and the businessfolk trying to track which version best suits them. These changes are made due to many factors, anything from developers or users requesting a particular nondefault feature, to a convenience for the distribution's own people. Innovation is continually fostered as the Linux distributions try to identify the very best tools that can help them solidify their positions against other distributions.
The key thing here, really, is that in Linux it is possible to exchange the core of your operating system for a different version. Anyone who doesn't like a distribution's specialized kernel can "simply" grab the source of the main kernel and build a replacement. It's actually not as hard as it sounds, though the process can be intimidating to newcomers.
ext3's Coming Out Party
In mid-2001, Andrew Morton (at the time the 2.4 kernel maintainer for ext2, ext3, and network drivers) showed up as actively involved in ext3 development. This signaled that ext3 had, in essence, been escalated to the next level. His posts regarding ext3's status arrived around once a month, suggesting that ext3 was considered mature enough to be under serious consideration for merging into the kernel.
Then, by late September 2001, Morton released a test patch that integrated ext3 into kernel 2.4.9. This was very much a test for those who were brave enough to try it. Morton's announcement included, "This will soon be broken out into a separate patch to make ext3 suitable for submission for the mainstream kernel." Over the next week, people started asking again when ext3 would be added, indicating the level of anticipation among those waiting for a journaling filesystem fully compatible with ext2.
Eventually, Alan Cox – the "next level up" maintainer – answered. "When the ext3 folk ask me to merge it," he said. His policy, it appeared, was not to merge patches into his test version of the kernel (known as the -ac tree) until the project's developers asked him to. Sometimes he can be overridden or will decide to make a special case, but typically the developers know exactly where they are when working with the code, and whether trying to merge it at the time would be a disaster or fairly smooth sailing.
So, people waited. Somewhere between then and October 8, 2001, Tweedie and his cohorts must have spoken up. On that day, ext3 was merged into Cox's version of the 2.4.10 kernel. This was the last major testbed. Many people testing new features that they desperately wanted or needed used an -ac kernel on various systems to try to shake out the bugs.
ext3 development still continued, of course. In early November 2001, Morton announced another significant ext3 update. People continued agitating for ext3 to be added in the next kernel version, and the next, while others asked Torvalds to wait until the current "big" problems with the 2.4 kernel – which was actually a pretty stable new release – were better ironed out.
The next issues that showed up are kind of odd and amusing, and while they aren't about values, they are a demonstration of the strange things that can happen when a new technology is introduced. Red Hat added ext3 to Red Hat Linux 7.2 (as Torvalds had predicted). Administrators using Red Hat 7.2 began making strange observations about the filesystem checker running on boot. The strange part was that a boot-time check isn't necessary with ext3, nor was it the default behavior on a system using ext3. It turned out that, somehow, ext3 was not being properly enabled on those systems. People had been running ext2 all that time instead. I'm sure this little gaffe was on developers' minds as ext3 came closer to being officially added.
By mid-November, ext3 reached Torvalds' own "test kernel," meaning it was added into a "pre" version of the kernel. Under the kernel naming scheme, ext3 was officially added to kernel 2.4.15-pre2, which eventually became 2.4.15-final, which is the same as 2.4.15. One ext3 fix was added in kernel 2.4.15-pre8, and after only a couple more tweaks to the fledgling kernel, 2.4.15 was released for production use on November 22, 2001. Of course, development of ext3 didn't stop there either. Since then, Access Control Lists (ACLs) have been incorporated into the filesystem, along with many more features and improvements.
(To give you an overall time line for how long it takes for even minor kernel versions to advance, the current Red Hat Linux beta [Severn] is [at the time of this writing] based on kernel 2.4.21.)
Organic, and Yet Organized
Throughout more than two years of work, many other features were added to the Linux kernel. Others were refined, and some were even removed. Kernel maintainers changed as well, according to both time constraints and interests. Even the process of posting new kernel versions was "upgraded." The team added ChangeLogs – files containing a list of the pertinent changes in each minor code update, including who made the changes – so that people can more easily track what in the heck is going on.
All of this happened in the midst of bug reports and fixes, discussions of the best way to approach upcoming requirements, and more. Ultimately, everything keeps moving. The Linux kernel grows and improves, and all of the bits and pieces find their way to where they need to be.
Ultimately, that is how open source development works. Bringing your own company into this process gives you a number of advantages. If you manufacture hardware, you can either assign someone to the Linux kernel team to produce the Linux drivers for your products, or you can give your product's specifications to someone from the driver community to build the drivers for you. Not only does this guarantee that Linux users will consider your product, but it's great PR as well. Software companies can become involved in the Linux Standard Base (www.linuxbase.org), develop their products to this specification, and run a Linux beta program to help the community feel involved in the product's development.
If there is one phrase that is true for the Linux and open source communities, it is this: You get out of it what you put into it. Work with the community, maybe even contribute some source code along the way, and you will experience not only a kind of product loyalty that just might astound you, but a stronger product offering as well.