By Dee-Ann LeBlanc
May 12, 2004 12:00 AM EDT
Most people who know anything about Linux know that the kernel – the core of the operating system, Linux itself really – is developed by Linus Torvalds and a large number of volunteers.
In a nutshell, Linus is the top dog, the one responsible for guiding the overall process. Beneath him are people responsible for various kernel sections and even versions. One person might be in charge of maintaining a kernel through its production life cycle, such as Andrew Morton preparing to take care of the 2.6 kernel series. Others are in charge of various platforms (64-bit SPARC, Mac 68K, SGI, etc.). Yet more are in charge of subsystems, such as the layer that handles SCSI hardware operation. It's a sensible top-down approach that has grown from the need to manage a code base of ever-increasing complexity, in which both work and responsibility are divided among respected members of the community.
And yet, ultimately, anyone can get involved in the Linux kernel development process. You could, for example, assign someone at your company to function as a beta tester for the Linux kernel and the collection of Linux projects and products you use in your business. If having thousands of beta testers all over the world helps to produce top-notch software like we have in the Linux community, then making sure that your own people report problems you experience before taking a new kernel or tool version into a production environment increases the return on your Linux investment.
All that those who want to contribute have to do is a bit of homework. A quick visit to the Linux Kernel Mailing List (LKML) FAQ at www.tux.org/lkml helps you understand the main kernel discussion list in all of its glory, and www.tux.org/lkml/reporting-bugs.html teaches you how to report bugs effectively to the kernel maintainers. Even just testing the experimental kernel tree can be a great help, and you'll learn a ton along the way.
These are open source values. Everyone can contribute, even if they're not a programming guru. But there's a finer point to this as well. To really be helpful to many open source projects, you have to take the time to learn at least the rudiments of their "system." Some have online forums, some have mailing lists, and some are just a small Web presence with a single e-mail address where you can write to the developer. It all depends on the size of the project and the audience.
The Linux kernel serves as an extreme example. Its mailing list alone is so busy that there are sites such as Kernel Traffic (http://kt.zork.net/kernel-traffic) whose sole purpose is to summarize the information in a useful manner. On top of that, there are millions of users. Even if one-half of one percent of all Linux users sent bug reports to the list or directly to the various maintainers each day, that would be thousands of reports. Hence, a system. This also explains why blundering on without learning the system tends to earn people grouchy responses.
Shared values are the glue that holds the open source community together. This is the single biggest thing that many journalists and skeptics still haven't grasped. It's not money, fame, or power. I'm not even entirely convinced it's all about the itch-scratching we seem so fond of talking about in open source land, as though everyone has fleas.
What are some more of the values that hold us together? Let me use an example to shed some light on the subject.
An Example: The Birth of ext3
Consider this once-contentious issue: adding a default journaling filesystem to Linux. Way back at the turn of the century (early 1999), Linus Torvalds and the gang were working on the 2.3 kernel series, on their way to kernel 2.4. Kernel list participant Alan Curry had been experiencing performance problems on a Linux server handling high traffic. He was able to trace this to a problem with two components: syslogd and fsync().
syslogd is the program that handles recording errors, accesses (such as a piece of mail being sent, or someone requesting a Web page), and more for the various services on many Linux systems. As you might imagine, on an ISP's e-mail server syslogd can grow quite busy. A feature called log rotation prevents individual log files from getting too huge by breaking them into pieces, and creating a new file each time the current file reaches a certain size. Since the files will add up infinitely if left alone, this feature also keeps only a set number of pieces around before either compressing them and farming them off for backup and deletion, or just outright deleting them. The system administrator can either set how often to do this, or put limits on how large to let the individual files grow.
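The article doesn't say which rotation tool Curry's server used; as an illustration only, here is a minimal sketch of a size-based policy using logrotate, one common tool for the job. The log path and the numbers are hypothetical.

```
# Hypothetical /etc/logrotate.d/ entry: rotate the mail log once it passes
# 30 MB, keep four old copies, and compress the copies being kept.
/var/log/maillog {
    size 30M
    rotate 4
    compress
    missingok
    notifempty
}
```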
Curry was able to determine that his problem hit whenever a particular log file grew huge, to approximately 36MB. At this stage, the syslogd program would consistently hang – it would stall and stop working – until the log file was rotated and small once again. Tracing the issue further, he discovered that the fault lay with fsync(), the C library function that ensures that data buffered in memory gets properly written out to the file on disk.
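To make the mechanism concrete, here is a minimal C sketch (not syslogd's actual source) of a logger that forces each message to disk the way a synced syslogd log file is handled. The fsync() call blocks until the kernel has flushed the file's dirty buffers, which is where the stall was showing up.

```c
/* Minimal sketch (not syslogd's real code): append a message to a log
 * file and force it to disk before returning, the way syslogd handles
 * "synced" log files. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int log_message(const char *path, const char *msg)
{
    int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0640);
    if (fd < 0)
        return -1;

    if (write(fd, msg, strlen(msg)) < 0) {
        close(fd);
        return -1;
    }

    /* Block until the kernel has flushed the file's dirty buffers to
     * disk. This is the call that stalled as the log file grew. */
    if (fsync(fd) < 0) {
        close(fd);
        return -1;
    }

    return close(fd);
}
```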
The first suggestions from the kernel mailing list were all workarounds, things that various people would try on their own servers just to keep things moving along. One was to simply rotate the log files more often. That works, of course, but it's not really a solution. Others suggested another approach: disabling syslogd's use of fsync(). Of course, if you do that you may find after a system crash that there's vital data missing from your log files, so that's no good. Right?
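That second workaround amounts to a one-character change: the classic sysklogd syslog.conf lets you prefix a file name with a minus sign to tell syslogd not to sync that file after every message. A sketch, with illustrative selectors and paths:

```
# /etc/syslog.conf fragment (selectors and paths are illustrative).
# The leading "-" tells syslogd to skip the fsync() after each write,
# trading a little crash safety in the log for speed.
mail.*          -/var/log/maillog
# Without it, every message is synced to disk before syslogd moves on.
kern.*          /var/log/kern.log
```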
Was fsync() needed? A patch was submitted, but the technical solution offered wasn't strong enough for transaction-oriented databases. Debate raged again, with Linus trying to push people toward simpler and simpler solutions rather than letting things get more complex, and therefore more likely to have problems. Extensions to the ext2 filesystem were proposed and Linus Torvalds said no, no, no, and again no.
While Torvalds is revered by many in the Linux community, he receives little special treatment on the kernel development list. Everyone involved in kernel development wants to do the best job possible, which means that discussions – or arguments, which is what this degenerated to for a bit – tend to happen with everyone as peers for the most part. Torvalds might have the last word, but that doesn't mean that people always let a topic drop if they think that there really is something to it.
Apparently, Stephen Tweedie had already started working on such extensions to ext2 in an attempt to quickly answer the need for a journaling filesystem in Linux – something that would definitely address the fsync() problem. This displeased Torvalds to no end, since he didn't want ext2 to become known as the ever-changing filesystem, and pointing out that Tweedie was calling his work ext3 did only a little to dull Torvalds' annoyance. Finally, in an exchange that would do an armchair psychologist proud, Alan Cox and Tweedie managed to steer things to calmer waters.
Once there, the debate continued on just how far this journaling filesystem should go. These discussions – tense or otherwise – are one of the natural ways that innovation is constantly fostered in the Linux community. Once the prospect of a next-generation default filesystem was accepted, all of those little "wish lists" that lurk in the back of the mind started leaking out from all directions. Torvalds himself started this by outlining some of the immediate issues he would love to see dealt with, such as removing "." and ".." from the directory trees. In true open source developer fashion, that comment began a discussion about whether there were enough benefits, or too many dangers, in doing so.
Somehow, in all of this, the whole issue fell off the radar and folks must have left Tweedie to do his work in peace. Now he was aware of their concerns and wishes, and they simply must have trusted him to offer something to test and pound on when the time came. After all, it's one thing to talk about creating a journaled filesystem for Linux. It's another thing to do it.
ext3: A Work in Progress
A mere two weeks later, Thomas Pornin asked an innocent question about whether BSD-style soft updates were in the works for Linux. This brought up the issue of Tweedie's work on ext3 and an already-existing solution called dtfs (now LinLogFS). A new filesystem permissions model somehow wormed its way into the discussion, sidetracking everything, and then in mid-1999 SGI announced that it was making a version of its own IRIX filesystem into an open source filesystem for Linux – XFS, a journaling filesystem.
Was Tweedie's work in vain? (Some would say that such projects are never in vain, since they often reveal issues that people might not otherwise have considered.) This would seem a great time for a cliffhanger, but everyone knows the answer. It was agreed that if XFS were placed under the GPL, Tweedie might drop ext3. An SGI employee pointed out that XFS had to be partially rewritten to replace code that belonged to other people – and to remove patent issues – so XFS wouldn't be ready for an open source release any time soon. The folks at SGI didn't even know yet exactly which license they would choose. This put Tweedie's work back into the running, since no one was going to adopt a new default filesystem that hadn't actually been written.
Once that furor died down, the fledgling ReiserFS became a serious contender. Timing issues prevented it from being included in the 2.3 kernel stream, and around a month later the issue of ext3 came up once again. By then, ext3 had attained the lofty status of release 0.0.1 with 0.0.2 on the way. Already, at this point, the only difference from a user's point of view between ext2 and ext3 was the journal file. Whether it would remain this way, Tweedie was still not sure.
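That compatibility has held up: because ext3 is essentially ext2 plus a journal, an existing ext2 filesystem can be converted in place with the e2fsprogs tools. A brief sketch (the device name is hypothetical, and backing up first is always wise):

```
# Add a journal to an existing ext2 filesystem, making it mountable as ext3.
tune2fs -j /dev/hda1

# Then mount it as ext3 (or update /etc/fstab to match).
mount -t ext3 /dev/hda1 /mnt/data
```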
Where are the values here? Well, for one thing, everyone was working on their own projects. No one committed to which would be the "winner" ahead of time. It might seem a bit backward to those from the commercial world of planning everything out and driving all of your resources into a single project, but in this type of environment, it's acknowledged that there are many valid means to achieving the same end. Filesystem theory is a complex issue. Today, we have journaling filesystems with various strengths and weaknesses to pick and choose from. Some handle tiny files best, some handle huge files best, some are that middle ground that's great in many circumstances.
When asked in January 2000 if LVM and filesystem journaling would be folded into kernel 2.4, whether with ext3 or ReiserFS, the general consensus on the kernel list was no. There were too many issues that needed to be ironed out before Torvalds and others felt ext3 was solid enough for production use. ReiserFS, however, was closer to reaching this point. Neither journaling filesystem ultimately made it into the initial 2.4 release – an interesting fact considering that ext3 is in such heavy use today.
ReiserFS did, however, make it into kernel 2.4.1 in 2001, mostly due to the fact that "of the journaling filesystems it's the only one I know of that is in major real production use already, and has been for some time," according to Torvalds.
XFS was also in heavy testing then, and so was ext3. However, Torvalds has a policy against just integrating anything and everything into the kernel. If a small group of people fully capable of patching the kernel themselves – or building the modules on their own – are the only people interested in a particular area (such as XFS in this case) then he chooses to wait until there is more demand. As far as ext3's demand went, Torvalds said, "I would expect ext3 to be the next filesystem to be integrated, but I would also expect that Red Hat will actually integrate it into their kernel first, and expect me to integrate it into the standard kernel only afterwards."
This little quirk of various distributions using slightly different kernels is another thing that confuses both new users and the businessfolk trying to track which version best suits them. These changes are made due to many factors, anything from developers or users requesting a particular nondefault feature, to a convenience for the distribution's own people. Innovation is continually fostered as the Linux distributions try to identify the very best tools that can help them solidify their positions against other distributions.
The key thing here, really, is that in Linux it is possible to exchange the core of your operating system for a different version. Anyone who doesn't like a distribution's specialized kernel can "simply" grab the source of the main kernel and build a replacement. It's actually not as hard as it sounds, though the process can be intimidating to newcomers.
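For the curious, here is a rough sketch of what building a replacement kernel looked like in the 2.4 era. The version number, paths, and boot loader details are illustrative, and your distribution's documentation is the real authority.

```
# Fetch and unpack a mainline kernel (the version number is just an example).
cd /usr/src
tar xjf linux-2.4.15.tar.bz2
cd linux-2.4.15

# Configure and build; 'make dep' was still required in the 2.4 series.
make menuconfig
make dep
make bzImage
make modules
make modules_install

# Install the new image, then add an entry for it to /etc/lilo.conf
# and re-run lilo (details vary by distribution and boot loader).
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.15-custom
cp System.map /boot/System.map-2.4.15-custom
/sbin/lilo
```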
ext3's Coming Out Party
In mid-2001, Andrew Morton (at the time the kernel maintainer for ext2, ext3, and network drivers in 2.4) became visibly involved in ext3's development. That fact signaled that ext3 had, in essence, been escalated to the next level. His posts on ext3's status arrived about once a month, suggesting that ext3 was considered mature enough to be under serious consideration for merging into the kernel.
Then, by late September 2001, Morton released a test patch that integrated ext3 into kernel 2.4.9. This was very much a test for those who were brave enough to try it. Morton's announcement included, "This will soon be broken out into a separate patch to make ext3 suitable for submission for the mainstream kernel." Over the next week, people started asking again when ext3 would be added, indicating the level of anticipation among those waiting for a journaling filesystem fully compatible with ext2.
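"Trying it" meant applying the patch to a vanilla source tree and rebuilding. A sketch of the patching step (the patch file name here is hypothetical; the real one was given in the announcement):

```
# Apply the test patch to a pristine 2.4.9 source tree, then rebuild as usual.
cd /usr/src/linux-2.4.9
patch -p1 --dry-run < ../ext3-test.patch   # first check that it applies cleanly
patch -p1 < ../ext3-test.patch
```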
Eventually, Alan Cox – the "next level up" maintainer – answered. "When the ext3 folk ask me to merge it," he said. His policy, it appeared, was not to merge patches into his test version of the kernel (known as the -ac tree) until the project's developers asked him to. Sometimes he can be overridden or will decide to make a special case, but typically the developers know exactly where they are when working with the code, and whether trying to merge it at the time would be a disaster or fairly smooth sailing.
So, people waited. Somewhere between then and October 8, 2001, Tweedie and his cohorts must have spoken up. On that day, ext3 was merged into Cox's version of the 2.4.10 kernel. This was the last major testbed. Many people testing new features that they desperately wanted or needed used an -ac kernel on various systems to try to shake out the bugs.
ext3 development still continued, of course. In early November 2001, Morton announced another significant ext3 update. People continued agitating for ext3 to be added in the next kernel version, and the next, while others asked Torvalds to wait until the remaining "big" problems with the 2.4 kernel – which was actually a pretty stable new release – were better ironed out.
The next issues that showed up are kind of odd and amusing, and while they aren't about values, they do demonstrate the strange things that can happen when a new technology is introduced. Red Hat added ext3 to Red Hat Linux 7.2 (as Torvalds had predicted). Administrators using Red Hat 7.2 began making strange observations about the filesystem checker running on boot. The strange part was that a full check isn't necessary with ext3, nor was it the default behavior on a system using ext3. It turned out that, somehow, ext3 was not being properly enabled on those systems; people had been running ext2 all that time instead. I'm sure this little gaffe was on developers' minds as ext3 came closer to being officially added.
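The mix-up was easy to check for, as it happens, because the kernel reports the filesystem type it actually mounted. A small sketch (device names and output are illustrative):

```
# Either of these shows the filesystem type the kernel actually mounted.
mount | grep ' on / '
grep ' / ' /proc/mounts
# A line such as "/dev/hda2 / ext2 rw 0 0" would reveal that the root
# filesystem was still running as plain ext2, journal or no journal.
```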
By mid-November, ext3 reached Torvalds' own "test kernel," which means it was added into a "pre" version of the kernel. Using the kernel naming scheme, ext3 was officially added in kernel 2.4.15-pre2, which eventually became 2.4.15-final, which is the same as 2.4.15. There was one ext3 fix added in kernel 2.4.15-pre8, and then, only two more tweaks to the fledgling kernel later, kernel 2.4.15 was released for production use on November 22, 2001. Of course, development of ext3 didn't stop there either. Since then, Access Control Lists (ACLs) have been incorporated into the filesystem, along with many more features and improvements.
(To give you an overall time line for how long it takes for even minor kernel versions to advance, the current Red Hat Linux beta [Severn] is [at the time of this writing] based on kernel 2.4.21.)
Organic, and Yet Organized
Throughout more than two years of work, many other features were added to the Linux kernel. Others were refined, and some were even removed. Kernel maintainers changed as well, according to both time constraints and interests. Even the process of posting new kernel versions was "upgraded." The team added ChangeLogs – files containing a list of the pertinent changes in each minor code update, including who made the changes – so that people can more easily track what in the heck is going on.
All of this happened in the midst of bug reports and fixes, discussions of the best way to approach upcoming requirements, and more. Ultimately, everything keeps moving. The Linux kernel grows and improves, and all of the bits and pieces find their way to where they need to be.
Ultimately, that is how open source development works. Bringing your own company into this process gives you a number of advantages. If you manufacture hardware, you can either assign someone to the Linux kernel team to produce the Linux drivers for your products, or you can give your product's specifications to someone from the driver community to build the drivers for you. Not only does this all but guarantee that Linux users will consider your product, but it's great PR as well. Software companies can become involved in the Linux Standard Base (www.linuxbase.org), develop their products to this specification, and have a Linux beta program to help the community feel involved in the product's development.
If there is one phrase that is true for the Linux and open source communities, it is this: You get out of it what you put into it. Work with the community, maybe even contribute some source code along the way, and you will experience not only a kind of product loyalty that just might astound you, but a stronger product offering as well.