Does an organization have anything to gain from .Net?

Microsoft's latest technology initiative is surrounded by questions and loaded with risks

(LinuxWorld) — .Net is Microsoft's latest technology initiative. To date, the measurable deliveries include a marketing blitz, beta versions of new technologies and a repackaging of existing technologies. Unfortunately, a clear picture of what .Net is and where it will lead has yet to emerge. Even in this uncertain environment, organizations are starting to implement projects based in .Net.

Organizations must weigh the pluses and minuses of using any technology, as the stakes are very high. Selecting the wrong technology at the wrong time will cause a product to fail and may sink an entire organization. Conversely, failing to add compelling new features that are enabled by a new technology can put a company at an extraordinary market disadvantage and be equally deadly.

This article begins by discussing the role of risk assessment when selecting technologies. It then identifies the risks a company faces in using .Net by looking at other technology initiatives in Microsoft's past and predicting the near-term future. Using this information, it offers a rational evaluation of .Net's role in an organization and discusses whether less-risky solutions are available.

Risk-based technology decisions

Software organizations survive only by identifying risks in a technology and finding ways to avoid or mitigate that risk through research and prototyping. This mitigation is usually expensive, and the benefit to customers may be negligible.

Hence, blindly using any technology is irresponsible; technologies must always provide a clear customer benefit that outweighs any risk. The decision of which technology or technologies to use falls squarely on management. However, management must make these decisions only after performing a full risk-assessment based on sound engineering principles and measurable information.

One common mistake organizations make is to agonize over the selection of a particular technology under the assumption that the decision is final. This strategy rarely works, as the rate at which technologies evolve makes a correct final decision highly unlikely.

Strategically, companies must plan essential product features based on existing, proven technologies. Optional features and product differentiation are planned assuming a continuing evolution of existing technologies and are added as those technologies become viable.

Generally, it is only possible to predict a technology's future six to 12 months down the line — the length of a typical product-release cycle. Limiting the window of evaluation to the next year allows rational decisions based on what .Net is actually likely to deliver in that time.

Plotting a course with crystal ball in hand

Alert organizations can weigh the risks and advantages of any technology and create the appropriate feature-development road maps. The only tricky part is predicting how an existing technology will evolve or how a new technology will perform in the near future.

There is an old maxim in diplomacy that says to tell what a country will do in the future, you only have to examine its past. The same is true for a company or industry.

Essentially, a company can change its public face by adopting a new motto. However, the bureaucratic structures within the company are much slower to change, and corporate culture resists change at the lowest levels even when management makes a concerted effort. As my father used to say, "A leopard cannot change the color of its spots."

A dispassionate examination of previous promises and the resulting deliveries of Microsoft's technology initiatives can illuminate the road ahead. The next section breaks the risks of developing with .Net into six categories. Each area is then compared to previous, similar technologies at Microsoft and in the industry to look for clues as to how these technologies will be realized. Finally, less-risky alternatives that provide the same behavior promised by .Net are examined.

Risks, risks and more risks

An examination of how projects fail illuminates several categories of technology risk. Each category below begins with a brief description, then looks at how other Microsoft technologies have handled the same risk. Finally, mitigation strategies are offered for each.

.Net increases risk because most of its new technologies are designed to be used over the Internet for corporate infrastructure. Technologies that are limited to clients may inconvenience users, but they rarely threaten an entire organization. On the other hand, failure of corporate infrastructure can jeopardize revenues and destroy enterprise data. The following categories were selected with the needs of corporate infrastructures in mind.

1. Risk of poor definition and moving targets

Developers rely on stable application-programming interfaces (APIs) and system services when creating applications. APIs are the interfaces the developer uses to access system services. If these change, then the programmer has to rewrite entire sections of an application.

System services provide needed access to the operating system and hardware. For example, system services allow windows to be drawn on the screen and applications to communicate over the Internet. If system services change during development, then bugs will appear in previously working applications.
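The mitigation most organizations reach for is insulation: code against a thin interface of your own rather than directly against a vendor API, so that an API change is absorbed in one adapter instead of rippling through the application. The following Java sketch illustrates the idea; every class and method name is invented for illustration and does not refer to any real .Net or Windows API.

    public class ApiInsulation {
        // The only contract the rest of the application ever sees.
        interface MessageSender {
            void send(String recipient, String body);
        }

        // One small adapter wraps whatever vendor or platform API is current;
        // if that API changes, only this class has to be rewritten.
        static class ConsoleSender implements MessageSender {
            public void send(String recipient, String body) {
                // A real adapter would call the vendor API here.
                System.out.println("sending to " + recipient + ": " + body);
            }
        }

        public static void main(String[] args) {
            MessageSender sender = new ConsoleSender();  // the only place the concrete class is named
            sender.send("ops@example.com", "nightly build complete");
        }
    }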

.Net is ill-defined. Microsoft executives and press releases often conflict over the basic definition of which technologies are in .Net [1][2]. Many reviews describe it by pointing to 'competing' technologies such as Java, a programming language; EJB, a middleware specification; and CORBA, an RPC mechanism. These are three divergent technologies, and they do little to explain what .Net could possibly be.

InformationIT speculates that the effort also involves Microsoft's desire to move to Web-based delivery of applications. It's ironic that this is precisely the model that Sun, Oracle and IBM described in their "Network Computer" initiative, which was openly mocked by Bill Gates [6].

Redmond Communications' Directions on Microsoft blames the confusion on the fact that .Net has three components:

  1. A corporate strategy that emphasizes Web services on its own servers
  2. A platform with technologies such as C#, a cross language VM and SOAP
  3. A series of hosted services, such as Passport and MyServices [3].

These three components seem to have little synergy and, in many ways, are based on divergent technologies. The schizophrenic definition of .Net is easily explained by looking at previous Microsoft technology initiatives such as NetMeeting, Windows Everywhere, ActiveX, OS/2 and Zero Administration for Windows (ZAW).

The marketing efforts promised extraordinary possibilities for each initiative, and each one failed to meet those promises. Some, such as OS/2, Windows Everywhere and ZAW, were dumped completely, leaving ISVs in the lurch.

After years of further development, other initiatives eventually delivered products that were marginally functional (DirectX, Windows CE, and even the original Windows product line, which was almost unusable for the first three years of its existence [7]). Major rewrites of earlier products were forced as Microsoft changed APIs and services.

Each of these initiatives was designed to head off a specific competitor by leveraging Microsoft's existing markets: Windows CE vs. Palm, Windows vs. TopView and ActiveX vs. Java. The goals of the products were defined not by customer needs or technical feasibility, but by marketers. This often led to poorly conceived and marginally functional products.

In this context, .Net makes perfect sense. Microsoft is trying to head off two very serious threats to its existence. First, Microsoft has been losing the server battle on the Internet to Unix-derived operating systems such as Linux, Solaris, and BSD. Second, Microsoft wants to establish a revenue model based on Internet transactions to shore up dwindling revenue streams as companies and individuals decide upgrades are no longer necessary.

Products defined by marketing organizations without regard to customer needs or technical realities pose two serious risks to software organizations. First, ill-defined components tend to change very rapidly as the needs of customers become known and technological limitations are realized. For example, DirectX has been through nine major API revisions in six years, forcing major rewrites on subsequent releases.

Second, the lack of existing customer needs makes it impossible for Microsoft to verify the services provided by these components until they see wide use in real-world applications. Organizations that use these components will be performing field tests on them.

These risks can be mitigated by recognizing that most of the .Net initiative is a repackaging of existing Microsoft technologies, such as VB, MTS, and COM+. These existing technologies will tend to be more stable. New technologies, such as C#, the .Net Virtual Machine (VM) and the Web-service components, will represent a much higher risk until they are widely deployed.

Organizations may also experience a significant disruption in their current business model. .Net's goals of transaction-based revenues and server dominance will affect future operational and profitability models through licensing fees from software-rental models, forced upgrades and transaction fees for accessing .Net services. The more successful these goals, the greater the disruption of current models.

2. Security risks

Web-based applications must be very secure. The recent spate of viruses shutting down entire organizations, affecting ATMs and airline-flight systems proves that security must be a primary concern. For every virus we know about, one has to wonder what the ones we haven't detected are doing.

Historically, Microsoft has a questionable security record: its systems account for less than 20 percent of Internet servers yet draw 54 percent of all attacks, making them roughly ten times more vulnerable than Unix systems [7]. IIS is so vulnerable to attack that Gartner Group recommends organizations abandon its use until a complete redesign is performed [8]. These statistics do not include client-based viruses, which are far more common.

There are two primary reasons that Microsoft is vulnerable to attacks. First, most non-Microsoft operating systems are made secure by limiting the number of services that are exposed to attack. For example, a server designed to serve Web pages will have all components shut off except the Web-server itself. However, Microsoft's default installation exposes many services to the Internet. Administrators are often unaware of their presence. This problem is exacerbated by both the opaque nature and complexity of Microsoft's server offerings and by the constant upgrade cycles that can add new components and services. Few system administrators fully understand the implications of running all services and exposed interfaces on Windows servers.
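A first, modest mitigation is simply knowing what is listening. The sketch below, which assumes nothing about any particular Windows release, probes the local machine for a handful of commonly exposed TCP ports so an administrator can at least enumerate the services that are reachable. The port list is illustrative only, and it should only ever be run against machines you administer.

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class ExposedServiceAudit {
        public static void main(String[] args) {
            // Ports commonly left open by default server installations of the era.
            int[] ports = {21, 25, 80, 135, 139, 443, 445, 1433, 3389};
            for (int port : ports) {
                try (Socket socket = new Socket()) {
                    // Short timeout: we only want to know whether something answers.
                    socket.connect(new InetSocketAddress("127.0.0.1", port), 200);
                    System.out.println("Port " + port + " is listening; is it supposed to be?");
                } catch (Exception e) {
                    // Closed or filtered: nothing reachable on this port.
                }
            }
        }
    }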

As a stark example, the latest attack that shut down ATMs and wreaked havoc targeted SQL Server, which many tools install without the administrator's knowledge [9]. It is troubling that SQL Server could ever be exposed on the open Internet without anyone knowing.

The second reason is Microsoft's basic architecture. An article on ActiveX published in Redmond Communications in 1996 found a series of fundamental security issues in ActiveX controls that would leave machines vulnerable [10]. So far, none of these problems has been addressed, and, indeed, many attacks on Microsoft operating systems have come through ActiveX controls.

Risks associated with server-security are very hard to mitigate. Third-party firewall products offer clients some protection — just as long as well-known security-challenged programs such as Kazaa, Outlook, and IE are prohibited. However, Internet servers are always vulnerable.

Unfortunately, there is no easy solution. Last year, Bill Gates announced that Microsoft would focus on security with its vaguely defined "Trustworthy Computing" initiative [11] [12]. The goals of this project include many features not having to do with security, such as reliability.

Generally, the only way to secure a system is to build security in from the first day, with every line of code examined and every exposed service evaluated. Every exposed service represents a security hole and raises the bar for developers. Minimal installations ensure locked-down systems.

Given these realities, truly securing Microsoft's product offerings will require a ground-up rewrite. This is what Gartner's report argued. More ominous for Microsoft, the business model advocated by Bill Gates in his book The Road Ahead, namely "one-size-fits-all" OS-development, is fundamentally contradictory to securing a system.

For example, the graphics and multimedia requirements for compelling game-play demand fast access to data. These features generally require access to hardware and memory, inevitably exposing a machine to attack. Windows NT has a very secure system architecture; as a result, it cannot run most of the more compelling multimedia features in DirectX. As Microsoft added DirectX to Windows XP, exceptions to the security model had to be made and security holes were opened.

One method of securing a system is to install all applications in a "sandbox". The sandbox ensures that no application can access unauthorized data. For example, Java is built on a sandbox and has been virtually impervious to viruses that violate system integrity.
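As a concrete illustration of the sandbox idea, the Java sketch below attempts to read a local file and is refused when it runs under a restrictive security policy. This is a minimal sketch of the Java mechanism of that era (the SecurityManager, since deprecated in recent Java releases), not of anything in .Net, and the policy file name is hypothetical.

    // Run under a restrictive policy, for example:
    //   java -Djava.security.manager -Djava.security.policy=restricted.policy SandboxDemo
    // (restricted.policy is a hypothetical, mostly empty policy file.)
    import java.io.FileReader;

    public class SandboxDemo {
        public static void main(String[] args) {
            try {
                // Untrusted code attempting to touch local data...
                new FileReader("/etc/passwd").close();
                System.out.println("Read allowed (no sandbox in effect)");
            } catch (SecurityException e) {
                // ...is stopped by the sandbox before any data is exposed.
                System.out.println("Sandbox blocked file access: " + e.getMessage());
            } catch (Exception e) {
                System.out.println("I/O error: " + e.getMessage());
            }
        }
    }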

.Net's new language, C#, has the possibility of being sandboxed. Unfortunately, all existing services are accessed through ActiveX, making it vulnerable to the same type of security bugs found in Microsoft's other products.

Organizations with business models that require security have many available options. With regard to operating systems, open-source alternatives such as Linux and BSD have proven to be much more secure and stable. There is impassioned debate as to whether open-source software is inherently more secure [18] [19]. However, three factors make open-source software less vulnerable to attack. First, transparent implementations allow good administrators to know precisely which services are running and, by extension, which ones may be exposing the organization to attack. Second, the implementations of most basic services, such as HTTP, SSL, and FTP, are well-known and have been examined by thousands of programmers with diverse experience. Finally, when a vulnerability is found, the large, capable community generally allows a patch to be released immediately after detection and confirmed almost as quickly by thousands of developers.

3. The viability of Web services as a business model

.Net allows "services" to be placed on the Internet. These are then sewn together to make new applications. For example, Web sites may use Microsoft's Passport service to authenticate a user's identity.

The current hype describes Web services as the new money-maker on the Internet. Unfortunately, the world is littered with models where components were designed to be created and shared, such as OLE, COM, CORBA, and EJB. The failure of each of these efforts to create widely distributed components that are anything more than low-quality UI widgets speaks to the complexity of the problem space.

Simple services, such as user authentication, can offer a compelling service-based argument. However, one quickly realizes that even this simple problem can get quite complex depending on the application. For example, identifying an individual in order to access a Web site is rather trivial, but would you use the same procedure and security level to allow an individual to make a bank transfer? The business rules for each application are so different that it's difficult to imagine one single service that can perform both tasks.
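To make the point concrete, the hedged Java sketch below shows the same authenticate call backed by two very different policies: a password check that is fine for reading a Web site, and a second-factor check that a bank transfer would demand. Every name and rule in it is invented for illustration; it is not how Passport or any real service works.

    public class AuthSketch {
        interface Authenticator {
            boolean authenticate(String user, String password, String secondFactor);
        }

        // Low-stakes policy: a password check is enough for reading a Web site.
        static class WebSiteAuthenticator implements Authenticator {
            public boolean authenticate(String user, String password, String secondFactor) {
                return "secret".equals(password);  // placeholder check
            }
        }

        // High-stakes policy: a bank transfer also demands a second factor,
        // exactly the kind of business rule a generic hosted service cannot anticipate.
        static class BankTransferAuthenticator implements Authenticator {
            public boolean authenticate(String user, String password, String secondFactor) {
                return "secret".equals(password) && "123456".equals(secondFactor);
            }
        }

        public static void main(String[] args) {
            Authenticator web = new WebSiteAuthenticator();
            Authenticator bank = new BankTransferAuthenticator();
            System.out.println("Web login:     " + web.authenticate("alice", "secret", null));
            System.out.println("Bank transfer: " + bank.authenticate("alice", "secret", null));
        }
    }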

So what does .Net bring to the table that other component models are missing? Microsoft has made it easier for these services to communicate through Extensible Markup Language (XML). XML has three main components:

  1. a simple mechanism for specifying field names and simple relationships called a data schema,
  2. the ability to transfer data in that schema and
  3. the ability to validate documents against that schema, for example through a DTD, to make sure the information sent is valid.

Microsoft's .Net products use a protocol named SOAP (Simple Object Access Protocol), which is built on XML.
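For a sense of what such a message looks like in practice, the sketch below builds a small SOAP-style envelope and pulls one field back out with Java's standard DOM parser. The service and element names are hypothetical, and this is not Microsoft's tooling; it simply shows how little of the message is payload relative to the tag names wrapped around it.

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import java.io.ByteArrayInputStream;

    public class SoapSketch {
        public static void main(String[] args) throws Exception {
            String envelope =
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "  <soap:Body>" +
                "    <GetQuote xmlns=\"urn:example:stocks\">" +   // hypothetical service
                "      <symbol>MSFT</symbol>" +
                "    </GetQuote>" +
                "  </soap:Body>" +
                "</soap:Envelope>";

            // Note how few bytes are payload ("MSFT") relative to tag names:
            // the overhead the article warns about.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(envelope.getBytes("UTF-8")));
            String symbol = doc.getElementsByTagName("symbol").item(0).getTextContent();
            System.out.println("Requested symbol: " + symbol);
        }
    }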

The interapplication communication space that SOAP addresses is well-known, with mature technologies such as IBM's SOM, CORBA and IIOP already established. Indeed, SOAP is missing many of the features of these older technologies, such as data safety, type-mapping rules and message-recovery rules.

It is difficult to understand what SOAP on XML adds to existing technologies. XML also suffers from what could be devastating deficiencies, such as up to 60 percent overhead due to tag names and very high resource requirements to parse incoming messages. These problems may not be too important for clients with simple data streams but could be overwhelming in real-world server applications. Most important, although many SOAP servers exist, the performance, reliability and long-term stability of these servers in the real world has not been demonstrated.

There is no evidence that Web services are a viable business model that can generate significant revenues. More importantly, even if they are, there is little evidence that SOAP and XML are appropriate implementation mechanisms.

Given Microsoft's history with interoperability, it is likely that it will expose APIs to the external world via XML but use internal, undocumented APIs to allow faster, more efficient communication among its own products. An example of this technique is Microsoft's broken implementation of HTTP in IIS, which makes Web pages load much faster in Internet Explorer.

This does not mean that companies shouldn't implement interoperable Web services. There is already a large number of alternatives for communicating between applications on the Internet. Open-source examples include Apache SOAP, Apache XML-RPC, and even MICO, an open-source CORBA implementation.
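As a taste of how little machinery a basic interoperable call requires, the sketch below POSTs a hand-rolled XML-RPC request using nothing but the Java standard library. The endpoint and method name are hypothetical, and a real project would normally let a library such as Apache XML-RPC build the envelope instead of writing it by hand.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class XmlRpcSketch {
        public static void main(String[] args) throws Exception {
            String request =
                "<?xml version=\"1.0\"?>" +
                "<methodCall>" +
                "  <methodName>examples.getStateName</methodName>" +  // hypothetical method
                "  <params><param><value><i4>41</i4></value></param></params>" +
                "</methodCall>";

            URL url = new URL("http://rpc.example.com/RPC2");  // hypothetical endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(request.getBytes("UTF-8"));
            }
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);  // the XML-RPC <methodResponse>
                }
            }
        }
    }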

4. The risk of Microsoft platform support

There is a common myth that because 90 percent of users have Windows, a homogeneous market exists. In reality, there are hundreds of variants of Windows in current use — Windows 95, Windows 98, Windows 98 with plus pack, Windows ME, Windows NT, Windows 2000, Windows 2000 Server, etc. Worse, each may have any number of fix-packs applied. Finally, upgrades to Office and IE surreptitiously upgrade the operating system, creating a combinatorial explosion of varieties. Each variety behaves differently and may not support new applications. Microsoft has also been releasing other operating systems that are completely incompatible with existing products, such as WindowsCE, X-Box, and Windows2K Micro Edition.

With each release, Microsoft has promised that applications on existing platforms would run without modification on the new platforms. It was actually promised that all games that could run on Windows would also run on the X-Box. Of course, even if the operating systems were not completely incompatible, the display and input technologies would make it a pipe dream. The reality is that programs must be substantially rewritten for each upgrade or, at the very least, tested on all variants.

To get a jump start on the competition, Microsoft exacerbates the differences between versions of its operating systems by releasing APIs and technologies before they have a chance to "settle down." Each subsequent version is then released with a fix pack or a newer version of Office or IE. For example, DirectX had to go through many releases before it became a viable platform for game-development, and it is nearly impossible for developers to tell which version will be on a target machine. Developers then download the required version, but it is always uncertain how it will interact with the other components installed on the machine.

Ironically, writing to Java protects developers from the vagaries of platform discrepancies. The "anywhere" in "write once, run anywhere" applies to the various versions of Microsoft products also. Some very compelling applications have been released in Java on MS without the usual porting problems.

Early adopters risk not knowing which .Net services will work on which platforms. Certainly the heavy resources required by XML will make it absolutely inappropriate for handheld devices for the foreseeable future. It is crucial to remember that, for the most part, new technologies will preferentially work on the newest versions of Microsoft operating systems — which represent only a small minority of deployed systems.

These uncertainties make it difficult to predict market share and the number of potential users. It also tends to increase support costs, as technical support has to figure out exactly which operating system configuration a user has before diagnosing a problem. Risk is somewhat mitigated for an organization that can ensure every client is running precisely the same version of Windows.

Organizations can avoid the Windows-version risk by developing with languages that run well on most versions of Windows. Examples include scripting languages such as Python and Perl. For more-sophisticated applications, Java offers the best cross-platform library support.
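As a small illustration of that portability argument, the Java sketch below runs unmodified on any Windows variant or Unix system and reports which platform it happens to be on; the standard library, not the application, absorbs the differences. It is a toy, of course, not a proof that any particular application will port cleanly.

    import java.io.File;

    public class PlatformReport {
        public static void main(String[] args) {
            // The same compiled class prints sensible answers everywhere.
            System.out.println("OS:        " + System.getProperty("os.name")
                    + " " + System.getProperty("os.version"));
            System.out.println("JVM:       " + System.getProperty("java.version"));
            // Path handling is abstracted; no per-version special cases needed.
            System.out.println("Separator: " + File.separator);
            System.out.println("Temp dir:  " + System.getProperty("java.io.tmpdir"));
        }
    }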

5. Performance

Performance is a difficult issue because you never know if it is adequate until it is too late. Systems in production fail just as the service becomes popular. It is equally devastating to buy enough hardware to support all eventualities; not only are the costs prohibitive, but waiting six months to make a purchase of new hardware may reduce the costs by a factor of four.

One way to predict the required hardware is to use benchmarks. Unfortunately, it is difficult to produce meaningful benchmarks even for well-defined APIs such as EJB. It took the database industry years of consistent standards to come up with meaningful benchmarks, and Sun is still trying to create meaningful benchmarks for its EJB products.

It is impossible to have realistic benchmarks for things as amorphous and ill-defined as Web services and .Net. In addition, Microsoft prohibits the publication of benchmarks in many of its licensing agreements.

Excite Clubs is an example of a sound migration strategy and of how benchmarks may be wildly misleading. Microsoft often publishes benchmarks showing that ASP is faster than JSP. Excite Clubs was a very high-volume site (more than 20 million pageviews per day) that was originally written in ASP on Windows NT.

The original architecture used 20 quad-processor Compaq servers with 2 gigabytes of RAM, ASP, and a Java business layer. This architecture supported 1 million pageviews per day at 80 percent CPU usage with an average response time of one second. It required one full-time system administrator, and the machines had to be rebooted two or three times per day.

The system was ported directly to JSP, and the Java ran unmodified on 16 twin-processor Sun Solaris machines. These machines were of similar cost to the original Compaq servers. The result was a system that handled 20 million page views per day with a response time of 0.01 seconds and less than 10 percent CPU usage. These numbers equate to a hundred-fold increase in performance. In addition, we replaced the full-time administrator with one who was also servicing 100 other machines.
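The hundred-fold figure follows from the numbers quoted above rather than from any new measurement; the short calculation below simply divides pageviews served per unit of CPU actually consumed on each architecture and lands on the same order of magnitude.

    public class CapacityEstimate {
        public static void main(String[] args) {
            // Figures quoted in the article, not new measurements.
            double aspViews = 1_000_000, aspCpu = 0.80;   // original ASP/NT system
            double jspViews = 20_000_000, jspCpu = 0.10;  // ported JSP/Solaris system
            // Views served per unit of CPU actually consumed:
            double ratio = (jspViews / jspCpu) / (aspViews / aspCpu);
            System.out.println("Approximate capacity ratio: " + ratio + "x");  // roughly 160x
        }
    }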

The difference was so startling that it is difficult to imagine how any published benchmark could claim the two platforms offer even remotely similar performance. It is often said that there are lies, damned lies and benchmarks. For example, a recent article contended that .Net performed better than J2EE on the example pet-store application delivered with J2EE. It was later revealed that these performance tests were based on an extremely flawed methodology, and the results were essentially meaningless [13] [14].

The requirements for mitigating performance risks are threefold. First, the chosen architecture must be easily scalable with regard to CPU usage, disk space and bandwidth, without requiring the purchase of too much hardware too soon. The ability to switch hardware and operating-system vendors is essential to truly mitigating this risk.

Second, organizations should never lock themselves into a single application-server vendor. Vendors make assumptions about the nature of the target applications to improve performance. If these assumptions are invalid, then the resulting performance will be abysmal.

Finally, for the highest volume systems, control over the internal details of the application server is necessary so that it can be adjusted to support the requirements of an application. This transparency is essential for the highest level of tuning.

Unfortunately, as we will see in the next section, .Net technologies not only lock you into Microsoft technologies but also into the Wintel architecture for the foreseeable future. These systems are not capable of scaling to the memory and processor power of Unix systems from IBM, HP, and Sun. Scaling must instead be accomplished through distributed-computing architectures and multiple boxes, which are much more difficult to implement, deploy, and support.

Organizations should be very cautious if a significant amount of development only works on one proprietary platform. Fortunately, it is very easy to mitigate these risks. For server development, writing to Unix platforms ensures a large palette of options from proprietary platforms, such as Solaris, AIX, and OSX, to open-source platforms, such as Linux and FreeBSD. The huge investment in development tools can be preserved by using open-source tools for configuration-management such as CVS, scripting languages such as Perl or Python, and build languages such as the excellent Ant tool. As is discussed in the next section, development should be in a language that can be easily moved across platforms.

6. Portability

One way to solve a performance problem is to switch platforms, much as the Mono project is trying to 'port' .Net to Linux.

The first question must be: how do you port something that is ill-defined?

For the moment, assume that .Net = C#, Microsoft's new programming language that is included with .Net. There are two requirements for full compatibility: exact-language semantics and compatible libraries. C++ serves as an excellent example of what will probably happen.

Exact-language semantics require that programs compiled on different platforms and compilers behave precisely the same. If they do not, subtle bugs will creep into systems that will be very difficult to isolate and fix.

One major problem is that C# does not have a clean-room specification of its memory model. A clean-room specification is required to duplicate the behavior of different implementations. Without this, Mono will never create a fully compatible memory model. Programs implemented on the different platforms will have slightly different but potentially devastating behaviors.

In C++, holes in the language specification make code produced with different compilers on the same platform behave differently. Incompatibilities include initializer order, destructor order and multiple-inheritance conflict resolution. These incompatibilities conspire to make cross-compiler compatibility for larger programs a very difficult task.
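By contrast, a language specification that pins these details down makes the same program behave identically on every conforming implementation, which is exactly what a ported C# would need a complete, clean-room specification to guarantee. The Java sketch below relies on the specified field-initializer order and therefore always prints the same result; it is an illustration of the property, not a statement about Mono or C#.

    public class DefinedSemantics {
        // The language specification guarantees these run in textual order: a, then b.
        static int a = init("a", 1);
        static int b = init("b", a + 1);

        static int init(String name, int value) {
            System.out.println("initializing " + name);
            return value;
        }

        public static void main(String[] args) {
            System.out.println("a=" + a + " b=" + b);  // always prints a=1 b=2
        }
    }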

The bigger problem with the porting project is the lack of solid library definitions. Without these definitions, it will be difficult for a developer to port an application between environments. The importance of reference implementations cannot be overstated. Consider the STL in C++: these are very simple, well-understood collection algorithms, yet the various implementations are not compatible. More complex libraries, such as IO or location services, are never going to be "replicated" without help from Microsoft. Indeed, the initial release of Mono's C# comes with GTK# for interface-creation. Programs written to GTK# will be completely incompatible with the interface-creation libraries provided by Microsoft and will only run on Linux boxes.

Sun solved the library-porting problem in Java in three ways. First, it provided a clean-room specification of its memory model as well as a reference implementation and complete specification for its byte-code generator. Second, Sun created reference implementations for all its libraries.

Third, implementers are required to implement all libraries in order to be certified and use the name Java. In a recent lawsuit, the judge even described Microsoft's lack of support for these libraries as "kneecapping Java" [15].

Without standard libraries for all of the .Net components on all platforms, any work done in .Net will lock you into that platform. Platform-lock is deadly in the Internet world; as volumes increase, it is crucial that organizations are free to use different platforms to handle scaling issues without major rewrites. Actually, platform-lock fits one of the fundamental goals of Microsoft's .Net initiative: dominating the server market.

However, .Net does not equal C#. An equally grave concern is whether or not other implementations will be allowed to access essential .Net services such as Passport and MyServices (formerly Hailstorm). Historically, Microsoft has used proprietary information to break well-established protocols such as Kerberos [16] and then used licensing agreements to prohibit developers from reverse-engineering the specification. It is a stated goal of Microsoft to prevent competition on the Internet by "de-commodifying protocols" [17].

This is where open source is at a grave disadvantage. As shown by Blackdown's excellent open-source Java implementation, there is no doubt that these open-source alternatives can produce excellent versions of the C# language and VM. However, Microsoft has already announced its intention to use its patents to prevent open-source alternatives to .Net, such as Open .Net and Mono, from duplicating its proprietary APIs. This will make portability impossible. In addition, the lack of a clean-room spec will prevent applications running on different versions of the C# VM from running with precisely the same semantics and create subtle, possibly devastating differences.

A cloudy immediate future

.Net and Web services may end up being of great importance to organizations, but at this stage their benefits are unclear. It is crucial for an organization to ask the most important question: what do my customers need? Risk and cost are then balanced to decide which products to produce, and technologies are chosen to create those products with the least cost and risk.

Currently, development on the .Net platform is high-risk for several reasons: the initiative is ill-defined, it will likely inherit the security problems that have riddled other Microsoft products, it will not be supported equally across all Microsoft platforms, and, because much of it does not exist, stability and performance have yet to be demonstrated.

Finally — and most important within the Internet environment — there are no established mechanisms to allow compatibility across platforms. This guarantees that an organization will not only be locked into Microsoft platforms, but also Microsoft services such as Passport and MyServices.

Organizations can mitigate risk and cost by using proven technologies that do not tie investment to particular platforms — technologies such as Java, Linux, BSD, Perl, Python, and SOAP. One of the most interesting side effects of the open-source movement has been to increase the number of options so that organizations can choose the correct tool for any application without becoming slaves to a particular vendor.

However, organizations may want to create demonstration products using .Net technologies to see if the risks can be significantly reduced. That way, these products would be ready to deploy when the technology finally arrives and a business model becomes apparent. If and when .Net accomplishes Microsoft's goals, a reasonable migration strategy will be necessary. Given the difficulties, this migration probably will not be required within the next year.

More Stories By Carmine Mangione

Carmine Mangione is founder and CTO of X-Spaces, Inc., a software-engineering training firm. Mangione has taught classes for many Fortune 500 companies, including Boeing, Countrywide Mortgage, and Citibank. Mangione contributes a long-running column to Enterprise Developer Magazine, and his articles have recently appeared in publications such as JavaWorld and NCWorld.
