wrong tool

You are finite. Zathras is finite. This is wrong tool.


The systems disruption in networking

November 16, 2014 by kostadis roussos 4 Comments

As I spend more time learning about networking, I am beginning to realize that significant disruptions are happening, and they are not the obvious ones like "SDN" or "OpenFlow".

One of the most significant differences between a networking device and a server is that the data path stays entirely inside a custom ASIC instead of running on a general-purpose multiprocessor.

Why is this important?

When a customer buys a device, what they are buying is the data path; the rest of the product is a second-order purchasing decision. In a networking device, the data path is entirely in the hardware, and therefore – although software is important – it is significantly less important than the ASIC that processes the packets.

A networking company – for all of the discussion about software – is a hardware company that builds an ASIC that processes packets. They live and die by the success of their hardware, and by hardware I mean the ASIC.

Historically, a networking company built its own chips and sold a device.

The disruption happening in the networking space is that, for increasingly important segments, a large chunk of networking ASICs is now built by merchant silicon vendors that sell to multiple networking companies.

This is very similar to what happened in the late 1990s, when Intel started killing off the custom microprocessors in the server market.

A simplistic and silly view of this disruption is that everyone is suddenly going to build white-box switches and routers. For some customers who have the depth of skill and expertise to build their own networking gear this will happen, but for the same reasons I am bullish on storage, I am bearish on this happening broadly. The majority of the market will continue to buy gear from networking companies.

The disruption that Intel created in the 1990s was that companies like SGI, which built everything soup-to-nuts, suddenly had to go from being hardware companies that had software to being systems companies.

Customers used to buy SGI hardware to get access to the massive number of processors or the high-end graphics engines, and they didn't care much about the software running on them. The selling point was the hardware.

As the number of vendors that could deliver the hardware SGI produced grew, customers went from buying whatever hardware SGI produced, running whatever software SGI put on it, to thinking about the whole system – and that created more options for customers.

In effect, the decision criteria went from what SPECint or SPECfp numbers a MIPS processor posted to what your TPC-C and TPC-D numbers were and whether you were a key partner of Oracle.

So what is a systems company then?

A systems company delivers a device that is balanced between hardware, software, and packaging. The balance of the three components creates a unique selling proposition that the customer and market are willing to pay a premium for.

Systems companies emerge in markets where the hardware is supplied by a small number of vendors and differentiation comes from how you combine and use that hardware in software, and from the package the customer can buy.

The challenge for systems companies is that sophisticated customers can build the same product from the same components and this creates a pressure to innovate in lots of areas outside of the core components.

The challenge for a hardware company as it makes the shift to a systems company is that the mindset of how you build a product has to change. Whereas in the hardware-centric world the hardware is built and the software guys have to figure out how to make it work, in a systems world there is a fine balancing act between the two. And whereas in a pure hardware world hardware performance was the be-all of your differentiation, a systems company has a combination of attributes in software, packaging, and hardware that creates its unique differentiation.

In fact, perversely, how you build software, and how easily you can build it, become more important over time than any specific hardware platform. And the choice of components becomes more important than any specific component.

The challenge for networking companies is that system design is not where their core focus has been over the last 20 years.

This change in how you build networking devices – from "here's an ASIC with software" to "here's a complete, balanced, and well-integrated system" – is going to be disruptive to how companies do business.

Now that I have said all of that, let me observe that I don't think my employer, Juniper, or Cisco are in danger of getting disrupted per se. I just think that the way systems get built is going to be very different, and that change is going to be very interesting.

And that, in my mind, is where the real disruption in networking is happening; everything else may just be noise.


Filed Under: innovation Tagged With: Disruption, Networking

The bullish case for storage vendors in a world of free cloud storage?

November 15, 2014 by kostadis roussos 1 Comment

The arrival of free consumer storage was predicted a while ago. As more market entrants arrived and as the complexity of delivering online storage declined, the value of just storing bits and bytes would naturally go to zero.

The question was: what would that mean for the market as a whole?

Aaron Levie, probably vastly ahead of the curve, figured this out and has repeatedly said that he’s moving up the stack in terms of capabilities.

The “race to zero” in cloud storage: “We see ourselves really in a ‘race to the top.’ And what we mean by that is, we are in a race to constantly add more and more value on top of storage, on top of computing, to deliver more capabilities, more kind of industry-unique experiences, greater depth of our platform functionality to the industries that we’re going after….If people confuse us with being in the storage business or whatever, that’s a cost of just the fact that there are just so many companies out there, there’s so much going on on the internet. ”

Read more: http://www.businessinsider.com/box-ceo-aaron-levie-qa-2014-11#ixzz3J5i3xZLg

 

The recent partnership between DropBox and MSFT is probably an example of how early entrants' most valuable current asset is the large number of customers they already have. This "race to zero" is not just a problem for Storage-as-a-Service vendors; zero-cost storage is a problem for all Software-as-a-Service vendors. Customers will expect to store arbitrary amounts of data within a SaaS and not pay for the storage. This will include backups, archives, and so on.

The challenge facing everyone in the SaaS market, though, is that although storage is free to the end user, there is a non-trivial cost to buying and managing storage, and the cost of managing, supporting, and maintaining that storage will increase over time as more customers use it. Worse, the importance of the storage will increase over time as more customers depend on the data existing and being accessible.

The paradox facing SaaS vendors is that storage is free, but data is priceless – and to store data you need storage! And so they will have to invest in their storage infrastructure; the question is how.

In the first phase of SaaS, vendors could legitimately build their own storage infrastructures. The ability to monetize the storage as storage made it possible for the SaaS vendors to hire and invest and innovate in storage infrastructure. And innovation was necessary because traditional storage infrastructure simply did not meet the requirements that the new SaaS customers had either in scale or in deployment type or in cost.

What this means is that if you are a SaaS vendor and building your own is the only model, you will have to become, over time, an increasingly great storage company to meet your expanding business needs, even though that is not where you should be adding value. This will require hiring a slew of technology experts to solve problems that are fundamentally outside of your core business.

And given that there are more SaaS companies than storage experts, this will both drive up salaries for storage experts AND … also create a market for storage infrastructure services.

Why will it create a market? Because hiring will be hard, and companies will look for products that meet their needs and the dollars will be big and entrepreneurs will try and satisfy that need.

And what will eventually happen is that SaaS companies will just buy storage infrastructure from infrastructure vendors, who will be able to make bigger and deeper investments than any single SaaS player. A storage vendor can leverage its storage investments across many customers, and SaaS players will naturally gravitate to the vendors that can provide a high-quality storage solution. My theory is that, sooner rather than later, SaaS companies will focus on leveraging technology built by storage infrastructure vendors instead of rolling their own.

Why will they buy instead of build?

Storage vendors are better suited to deliver low-cost, reliable storage because they have a deeper understanding of supply chains and a deeper bench of people working on making storage efficient than any single SaaS provider does. In effect, a small number of storage vendors supplying a large number of SaaS vendors will provide more cost-efficient storage than every SaaS vendor building storage infrastructure on its own.

But it’s all about the people.

Look, even if you don't buy anything else I said, let me leave you with this thought: hiring a bunch of guys who know storage is hard. Retaining those guys is even harder. It's much easier to buy a product from someone whose job it is to hire and retain a storage team, especially if the storage is a product you use – not something you directly monetize. A product you buy is a cost that is easier to control – you can spend more faster and cut spending faster with no real impact on the quality of the product over the long term. A product you build is a team you have to hire and lay off when you want to manage costs, and that will impact the quality of the product over the long term.

This is huge

For the last seven years of my career, I have been rather bearish on the storage infrastructure business. Ironically, because storage is not monetizable but very valuable, because doing it cheaply, efficiently, and reliably is very important to any SaaS vendor, because it's impossible for every SaaS vendor to do it properly, and because the number of SaaS vendors is going to explode, I believe for the first time in ages that storage sold by infrastructure vendors to SaaS players is going to grow very rapidly …

So I am bullish on storage

And I suppose as someone who once declared that enterprise storage is dead, that is a surprising conclusion.


Filed Under: innovation, Software

Some observations on DevOps?

November 3, 2014 by kostadis roussos 2 Comments

At Zynga, I had the extraordinary good fortune to lead an extraordinary team that, later on, we might have called DevOps.

We lacked the vocabulary that exists today, and we certainly lacked the insight that many people have since developed.

Instead of trying to explain what Zynga did, I thought it might be interesting to discuss what drove us to create a function that a lot of people call DevOps and that we called systems engineering.

First, some observations.

Any web scale application has two parts:

  1. Application code
  2. Platform code

And the whole discussion of DevOps hinges on the definition of Platform.

In a traditional web app, circa let's say 2003, the platform was a LAMP stack. Deploying the platform was easy, and debugging the platform was easy.

In 2008, as web apps routinely started to scale to millions of DAU and exceeded the performance of a single node, platform innovation became de rigueur in the industry.

We had to innovate at every layer: from how the database is built – resulting in a proliferation of NoSQL and SQL databases – to messaging, to languages, to monitoring, and to development tools.

Layered on top of this innovation was the emergence of programmable infrastructures. In this day and age of AWS, the way the world was in 2004 may feel very alien. You didn't have large pools of servers connected with very large, flexible networks that you could transform with software into whatever kind of database, storage, or compute node you wanted. You had very specific boxes that had to be cabled, racked, and stacked. There was very little dynamic, flexible infrastructure outside of perhaps the Googleplex.

Because the platform was under such a radical burst of change, and because of the programmable nature of the infrastructure itself, the basic relationship between developers and operations changed. Whereas before, operations created a fairly boilerplate platform infrastructure that developers programmed to, now developers were creating a platform that application developers, in turn, programmed to.

At Zynga the platform and the application were often delivered at the same time. As we needed new features, we delivered new platform capabilities to our game developers.

In this environment, the platform developers could not very well just hand off the operation of new platform components to an operations team that had no clue how to manage the new platform pieces.

And as a result, developers had to assume more responsibility for operations than they had in the past. Because the developers tended to have more sophisticated programming skills than the operations teams, operations came to involve significantly more automation and programming than had existed before. And as a result, a new kind of tooling began to emerge.
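To make that concrete, here is a minimal, hypothetical sketch of what "operations as programming" looked like in spirit: a small reconciliation loop that replaces unhealthy platform nodes through a programmable infrastructure API instead of through a runbook and a ticket queue. The names (`Node`, `InfraAPI`, `reconcile`) are illustrative assumptions of mine, not Zynga's actual tooling.

```python
"""A hypothetical sketch of platform operations written as code, not a runbook."""
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    role: str          # e.g. "cache", "db", "web"
    healthy: bool


class InfraAPI:
    """Stand-in for a programmable infrastructure API (cloud or private)."""

    def provision(self, role: str) -> Node:
        print(f"provisioning new {role} node")
        return Node(name=f"{role}-new", role=role, healthy=True)

    def decommission(self, node: Node) -> None:
        print(f"decommissioning {node.name}")


def reconcile(pool: list, infra: InfraAPI) -> list:
    """Replace unhealthy nodes so the pool converges on its desired state."""
    result = []
    for node in pool:
        if node.healthy:
            result.append(node)
        else:
            infra.decommission(node)
            result.append(infra.provision(node.role))
    return result


if __name__ == "__main__":
    pool = [Node("cache-1", "cache", True), Node("cache-2", "cache", False)]
    print(reconcile(pool, InfraAPI()))
```

The point is not the specific code; it is that the loop above is something a platform developer writes and owns, because no traditional operations team could have known what "healthy" meant for a platform component that shipped last week.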

Systems Engineering – at Zynga – was the peculiar merging of platform operations and platform development. Or what we might call DevOps. The core goal of the systems engineering group was to minimize application developer impact of the changing platform.

What I observed is that as platform components became standardized, the need for those components to be managed by a pure DevOps team declined, and those components transitioned to a more traditional operations team. In fact, most DevOps teams worked really hard to get out of the DevOps business by enabling operations teams to manage their infrastructure.

We did blur the lines between operations and development, but that was out of necessity, not because it was efficient. In fact, we discovered that having highly trained and expensive engineers doing operations was a waste of time and money. Similarly, having weak programmers writing complex software systems rarely got the result you wanted. The specialization and separation of roles was a good thing; the blurring of lines was a necessity, not a virtue.

For me, DevOps is part of a complete engineering organization at scale: application developers, platform developers, DevOps, and operations are all required to win.

 


Filed Under: innovation

Physics and Computer Games and Big Data

October 30, 2014 by kostadis roussos Leave a Comment


Over the last 15 years, there have been two useful heuristics for figuring out where computing is going.

When I want to look at how applications are going to be built, I look at games. After all, games are at the forefront of creating new kinds of digital experiences, and the need to push the boundaries of how we entertain ourselves is crucial to creating new revenue and sales opportunities.

When I want to look at how infrastructure is going to change, I look at what people want to do in the super computer space.

Two nights ago, I had the marvelous opportunity to hear a talk about physics and Big Data. As a software infrastructure guy – at the end of the day I like to think about how to build systems that enable applications – I have been wondering whether Big Data was going through a bigger-faster-stronger phase or whether there were new intrinsic problems.

And the answer is yes to both.

Clearly we need systems that can do more analysis faster, store more data at cheaper costs, etc.

What was not as obvious was that the exponential increase in transistors, coupled with the disruptive trend of 3D printing, was going to enable:

  1. A proliferation of very sensitive distributed sensors that need to be calibrated and whose data needs to be collected.
  2. The ability to find even weaker signals in the data.

In effect, we are going to be able to collect more data faster, and because of that we will be able to find things that we could not find before. And solving 1 and 2 on their own poses very interesting problems that can keep me busy for the next 10 years …

However there are some new problems that come out of that:


We will need to be able to find new ways to explore data and track our exploration through the data.


We will need ways to combine the datasets we create. After all, as more sensors get created and sensors get cheaper, the ability to combine datasets will become crucial. And as the scale of the datasets grows, an ETL step becomes less realistic.
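As a toy illustration of the dataset-combination problem (my sketch, not the speaker's): two sensors that sample at different, unsynchronized rates can be joined on approximate timestamps at analysis time, without first forcing everything through a heavyweight ETL step. The sensors, columns, and tolerance below are made up; the only real API used is pandas.

```python
"""Toy example: align two independently collected sensor datasets by time."""
import pandas as pd

# Two sensors sampling at different, unsynchronized rates.
temperature = pd.DataFrame({
    "ts": pd.to_datetime(["2014-10-30 00:00:00", "2014-10-30 00:00:10",
                          "2014-10-30 00:00:20"]),
    "temp_c": [21.1, 21.3, 21.2],
})
vibration = pd.DataFrame({
    "ts": pd.to_datetime(["2014-10-30 00:00:03", "2014-10-30 00:00:12",
                          "2014-10-30 00:00:19"]),
    "vib_g": [0.02, 0.05, 0.03],
})

# merge_asof pairs each vibration sample with the nearest temperature
# sample within a tolerance, instead of requiring identical timestamps.
combined = pd.merge_asof(
    vibration.sort_values("ts"),
    temperature.sort_values("ts"),
    on="ts",
    direction="nearest",
    tolerance=pd.Timedelta("5s"),
)
print(combined)
```

At laptop scale this is trivial; the interesting research problems start when the datasets are too large, too numerous, and too differently calibrated for any one system to hold them all.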

And then, to make this all more interesting, there is some thought that the way we collect data may itself create signals, and that meta-analysis of the data will be required. How you do that is an interesting problem in itself. And how do you create systems that can correct for that…

My head has sufficiently exploded. Turns out that just making things go faster isn’t the only problem worth solving…

 

 

 


Filed Under: innovation Tagged With: Big Data, Computer Games, Physics, Super Computers

Hardware not Software is Eating the World

October 30, 2014 by kostadis roussos Leave a Comment

Andreessen made this amazingly brilliant comment that "software is eating the world," and it has been ringing in my head for two years.

The idea is that software is doing more and more things in more and more places changing and restructuring the world as we know it.

And I agree.

My quibble is that software is actually the side effect of the real world-eater, and that is hardware. The relentless increase in the number of transistors at an exponentially decreasing cost is truly the world-eater.

The fact that we have these machines being added at this relentless pace is what is making it possible for software to do its thing.

If we consider a transistor as a machine, we are seeing an exponential increase in machines being deployed every year. And that exponential increase is making it possible for software to go deeper and deeper into our lives. And that exponential increase is making it possible for software to do more.

We have gotten so used to that increase that it almost seems natural to be drowning in transistors.

What is making this mobile cloud transformation possible is the explosion in the number of transistors per person.

Freemium became possible because the cost of compute per person became zero.

What guarantees another equivalently large transformation in 10 years is that the current sea of transistors will be only 3-10% of the total transistors deployed in 10 years. What appears to be an infinite computational infrastructure today will look puny and irrelevant in 10 years.

This relentless increase in transistors is what makes this industry so much fun.

And although I am a software guy, it is the hardware that is eating the world…


Filed Under: innovation

Is this really about Google Docs?

October 27, 2014 by kostadis roussos Leave a Comment

With respect to MSFT's announcement today: Google and Box have been offering free unlimited storage for a while now. The difference is that Google and Box required you to spend roughly $40 a month or more for a business plan.

My theory is that this latest announcement is really about Google.

I am a happy user of Google products. Even though Google Docs is less capable than its MSFT counterparts, its freeness makes up for that.

Now that my storage requirements are starting to push me to pay for a larger storage plan, I am shopping for an alternative.

And MSFT's new pricing for storage is going to make me look at MSFT for the first time in ages.

My theory is that MSFT is figuring that there are enough people like me who, when confronted with the choice, may buy the MSFT plan because it's cheaper. This may help slow down the growth of Google Docs.

DropBox may be collateral damage.


Filed Under: innovation, Storage

Dropbox will start showing ads Real Soon Now

October 27, 2014 by kostadis roussos Leave a Comment

With MSFT dropping the marginal price of storage for consumers to 0, what does this mean for Dropbox?

The old business model, which was really awesome, was that each user represented a permanent annuity. As each user consumed more storage over time, the user paid more. And as the user consumed more, moving the user's data became harder. With features like photo sharing and data sharing, moving that data became harder and harder still.

Although I am sure that DropBox assumed the cost per GB would drop over time, the assumption in the plan has to be that the price never got to zero and that what a user paid always increased as they stored more.

This was a sound business model until or unless the annuity goes away.

And that is what MSFT just did: they eliminated the annuity business. I am sure that DropBox will resist. But here is what will happen: as people pushing into higher and higher cost tiers start looking at their bills, the desire to move to cheaper solutions will outweigh the inconvenience. They will either move all of their data, or start moving parts of it, to newer, cheaper solutions.

The net effect is that, at a price of zero dollars, it makes a lot of sense to use the free DropBox offering and then, when you have to pay, go to MSFT for any excess data.

Now Dropbox has to come up with a new plan. Their annuity strategy is crippled.

And the new plan may be advertising. DropBox was a storage company that offered file sharing in the cloud. Now they are a content repository with some nifty content management and content sharing tools for consumers. Companies that provide tools for consumers and cannot grow their revenue as an annuity will turn to monetizing their customers more efficiently. And with all of that user data, the temptation to use it for advertising will be great.

Gmail made it okay to have your email automatically scanned for advertising – I wish I could have seen the ads on General Petraeus' account – so you have to believe DropBox customers will be okay with this as well…

 


Filed Under: innovation, Storage

And now unlimited – MSFT lays down the gauntlet

October 27, 2014 by kostadis roussos 4 Comments

Microsoft just announced that they are offering unlimited OneDrive storage for ~$7 a month along with Office 365.

Monetizing capacity in the storage industry is very hard. The storage industry was able to do it because of the cabling limitations of controllers and disk drives: eventually you needed to buy a new controller because you could add no more disk drives to the one you had. In the cloud, the consumer never has to buy another controller, so the need to buy more stuff to increase capacity never arises.

The fact that capacity is now going to be free – you're paying for Office 365 – shows that to be true.

After all, when the cost per GB is $0.03, charging roughly 3x the cost of the media for a terabyte ($100 vs. $30) is unsustainable.
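A back-of-the-envelope version of that arithmetic, using the post's rough numbers (the $100-per-terabyte figure is an illustrative plan price, not a quote):

```python
# Rough markup check using the numbers in the post.
cost_per_gb = 0.03                         # approximate $ per GB of raw media
media_cost_per_tb = cost_per_gb * 1000     # ~= $30 per TB of media
plan_price_per_tb = 100.0                  # illustrative yearly price for a 1 TB plan
markup = plan_price_per_tb / media_cost_per_tb
print(f"media ${media_cost_per_tb:.0f}/TB, plan ${plan_price_per_tb:.0f}/TB, markup {markup:.1f}x")
```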

First movers in this space offered the novelty of cloud capacity, now that the capability has been commoditized, the end game for vendors in this space is going to be – interesting.

Other vendors will have to react to this change. How they will is very unclear.

Google will quickly follow with a similar offer. I expect Box to be forced to do the same since they are competing with Google and MSFT for the same customers. DropBox will fight the hardest to avoid doing this but they too will eventually collapse.

Edit: Fixed the pricing to be monthly instead of yearly. 


Filed Under: innovation, Storage

Bracket Computing Revealed!

October 24, 2014 by kostadis roussos Leave a Comment

My friend Jason Lango, CTO and Founder of Bracket Computing, is finally, YAY!, bringing Bracket out of stealth.

His ambition was always to create a great company that would build significant new technology. And Bracket’s initial announcements do not disappoint. He’s put together a great team and the team is targeting a great problem. Good things happen when you do that.

I am, and have been for a very long time, a huge fan of Jason's – and the best man at his wedding – so I am excited to learn about their new technology. Whenever we've worked together, he's always done great work, and this is going to be no different…

And why am I so excited? Because I might know what happened in Vegas, but I only get to learn what goes on at Bracket after the public announcement 🙂

Definitely looking forward to learning more! No doubts that they will make great things happen!

 

 


Filed Under: innovation

How do box vendors get disrupted?

October 22, 2014 by kostadis roussos 2 Comments


One of the more interesting questions confronting anyone who works at a box company, like I do, is what causes a vendor to get disrupted?

There are a lot of business reasons, and technical reasons covered in a wide variety of sources…

My pet theory is the following:

A box vendor produces a box that has to be deployed as a box because of what it does. For example, to switch packets you need a box that can sit between three physical cables and make decisions about where to forward each packet.

Deploying a box is a pain in the ass. Replacing a box is hard.

And the problem is that once you, as a customer, deploy a box, you realize that you need the box to do more stuff.

And the vendor starts adding software features into the box to meet that need.

And at some point in time, the box vendor believes that the value is the software and not the box. And they are partly right, except that the only reason the customer is buying the software from the box vendor is because they must buy the box.

And the box over time becomes an increasingly complex software system that can do more and more and more and more.

And software engineers hate complexity. And where there is complexity there is opportunity to build something simpler. And competition tries to break into the market by making a simpler box.

The problem with the simpler box is that if the set of things a customer needs to do is A, and you can do A/2, you're simpler and incomplete. Inevitably you will become as complex as the original box.

What causes the disruption is when the customer no longer needs to deploy the box.

To pick an example that I can talk about, many vendors in the storage industry used spinning rust disk drives to store data. When customers decided that they no longer wanted to use spinning rust to store data, vendors like Nimble and Pure started to win in the market because they stored data in flash.

Nimble and Pure certainly didn't have the feature set of their competitors – how could they? The reason they won deals was that the decision criterion for the customer wasn't software; it was the desire to store the data differently, on a different kind of physical media – flash. That customer desire, coupled with a simpler box, made it possible for Nimble and Pure to win in the marketplace.

To put it differently, Pure may, for all I know, have A/5 of the features of the competition, but if the first-order decision is that you want to store data on flash in an external array, then that is irrelevant, because you're not comparing Pure to a spinning rust array; you're comparing Pure to another flash array. And there Pure has an advantage.

The networking industry has stubbornly resisted disruption for years. And part of the reason is that the physical box hasn’t really changed over time. Parts of the industry have changed, and overall the same leaders are still winning.

However, there is a possibility of a disruption in the networking industry, in particular, in the modern cloud data center.

The reason is that, for the first time in a long time, the fundamental network stack may be re-wired in a genuinely new way.

In an earlier post, I discussed the Network Database. In a traditional network, every network element has to be a full-fledged participant in the Network Database.

And like traditional applications that have to interact with a database to do anything interesting, network services must also interact with the Network Database to do anything interesting.

And it turns out that building an application that uses the Network Database is hard, unless your application fits into that model and … well … runs on the network element.

Companies like to whine that network vendors are slow. Maybe they are – or maybe the problem they are trying to solve, in the way they are trying to solve it, is just hard and takes time. Having worked with folks in this industry, I am convinced of the hardness thesis rather than the laziness thesis.

SDN has the potential to disrupt the model of software applications being built as distributed services running on multiple network elements, for one reason: it actually makes building network applications easier, because it aligns with how the vast majority of programmers think. Building applications out of distributed protocols is hard. Building applications on a centralized database is easy. And to the claim that you'll need multiple databases to scale: it turns out that is easy too – after all, that's what the web guys have been doing for years.
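To make the contrast concrete, here is a hypothetical sketch (the controller API, switch names, and topology are invented, not any vendor's product) of what a network application looks like when written against a centralized view of the network: read the topology from one place, compute a path with ordinary code, and write forwarding rules back. The equivalent built as a distributed protocol would have to run on, and converge across, every network element.

```python
"""Hypothetical sketch: a network app written against a centralized Network Database."""
from collections import deque

# Topology as the controller sees it: switch -> neighboring switches.
topology = {
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}


def shortest_path(topo, src, dst):
    """Plain BFS over the centrally stored topology."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topo[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None


class Controller:
    """Stand-in for a centralized controller's rule-installation API."""

    def install_rule(self, switch, dst, next_hop):
        print(f"{switch}: forward traffic for {dst} -> {next_hop}")


def provision_flow(ctrl, topo, src, dst):
    """Compute a path centrally, then push one rule per hop."""
    path = shortest_path(topo, src, dst)
    for hop, nxt in zip(path, path[1:]):
        ctrl.install_rule(hop, dst, next_hop=nxt)


provision_flow(Controller(), topology, "s1", "s4")
```

That is ordinary application programming: a data structure, a graph search, and an API call, which is exactly why it opens the field to a much larger pool of programmers than distributed protocol design does.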

And that creates an interesting disruption in the network stack. That is different than flash and disk drives but potentially as massive.

The value of the software stack that the traditional vendors have built over time begins to diminish as more services get built using a different model. One argument is that it will take time for the new services to be as complete as the old model, and that is true. If you believe, however, that the new programming model is more efficient and expands the pool of programmers by a step function, then the gap may be closed significantly faster.

Having said all of that, I am reminded of a saying:

Avec des si et des mais, on mettrait Paris dans une bouteille. ("With ifs and buts, you could put Paris in a bottle.")

The network box vendors are making their strategic plays as well; the industry will change, and we will most likely still see the same players on top …

 

 

 


Filed Under: innovation, Software
