wrong tool

You are finite. Zathras is finite. This is wrong tool.


Packet re-ordering is bad.

September 13, 2015 by kostadis roussos

One of the weirdest things at Juniper was the obsession the networking teams had with reordering packets. They kept talking about how applications could not tolerate reordering.

And this confused me to no end.

After all, TCP was designed on the assumption that packets could arrive reordered and out of sequence, and it was supposed to survive that mess.

And then it was explained to me as if I was the networking noob that I am. The problem is that when packets arrive out of order, TCP does not perform as well as when they arrive in order. And there are scenarios where TCP will assume the network is congested when a packet arrives late (a run of duplicate ACKs looks just like a lost segment) and will slow the connection down.

And so to work around the TCP protocol thinking it understands what is going on in the network, ASIC engineers do heroics to ensure that packets flow through routers in order.

Then I read this today and I was reminded of those conversations:

http://gafferongames.com/2015/09/12/is-it-just-me-or-is-networking-really-hard/

There are all sorts of very interesting applications running over the Internet that really are just pumping packets and want them arriving in order or not at all.

And because of these applications, the design of routers is vastly more complex than it would be if the layers above the network assumed nothing about packet ordering.

 


Filed Under: Hardware, Software

The completely misunderstood IOPS

September 1, 2015 by kostadis roussos

I was recently in a meeting about performance where the discussion turned to how many IOPS was the database doing.

And what was interesting was how much of our thinking about performance was formed in a world where IOPS were a scarce resource because the underlying media was so slow.

In the modern, post-spinning-rust world, IOPS are practically free. SSDs, and later technologies like 3D XPoint memory (what a horrible, horrible name for such an important technology), offer essentially free IOPS. The bottleneck is no longer the media (the disk drive) but the electronics that sit in front of the media.

The electronics include things like networks, memory buses, and CPUs. We are now bandwidth and CPU constrained, no longer media constrained. What that means is, of course, interesting.

One practical consideration is that optimizing the raw number of IOPS is no longer a worthy effort. Instead, we should be looking at the CPU and memory cost per IO. And we should be willing to trade off some CPU and memory for more IOPS to improve overall system behavior.

For folks, like myself, who grew up working really hard to try and avoid doing disk operations, embracing IO is going to be hard…

And as a buddy of mine once said, these materials scientists keep inventing exotic new technologies that keep us systems software engineers busy.

It’s a good time to work in systems.


Filed Under: Hardware, Software, Storage

Metrics over usability 

August 30, 2015 by kostadis roussos

  
This is the kind of shit that drove Zynga customers nuts.

In an attempt to drive metrics for other features, we add friction to the top activity. I didn't know about collages, nor do I care to know about them, and I certainly don't want to be reminded of them all of the time.

I used to be able to just enter a status, now I have to pick one.

This is just another example of an egregious, metrics-driven Facebook feature, like the hyper-aggressive attempts to get me to turn on notifications.


Filed Under: Software

It’s all about the constants

May 21, 2015 by kostadis roussos

When I was a student in college, I learned about algorithmic complexity and in particular about the Big O notation.

And what I found fascinating is how the folks who studied algorithms ignored constants.

Over the last 20 or so years, I have come to realize that I made a career out of caring about the constants, that my entire engineering career has been about something the folks who care about algorithms dismiss as irrelevant.

And that got me thinking. As a software engineer, I tend to think that I am making code go faster.

And then I worked at Juniper and the irrelevancy of my efforts on performance became clear.

Physicists make hardware go faster.

Algorithm designers make hardware and software go faster.

The rest of us just sit around trying to tweak the constants. All of our obsession with tight, efficient code is really a pointless exercise in worrying about constants.

And I got depressed.

And then I remembered that in the world I work in, constants do matter. Constants map to hardware that people have to buy, and shaving a constant can translate into large amounts of savings.

 

 


Filed Under: Software

Inside Onedrive

January 21, 2015 by kostadis roussos

Ever since Microsoft gave away storage with a product I would buy anyway, I have been working to move ~1TB of data to the cloud, and I have encountered many of the limitations of the service and learned a little bit about its technical underpinnings.

I suspect that when the folks who created OneDrive imagined the service, they thought of pictures. Pictures have a reasonable size (1-2MB), and you have a modest number of them. They did not imagine using OneDrive to back up entire multi-100GB file systems.

And that impedance mismatch has been tough.

OneDrive uploads a small number of files at a time (2-4). OneDrive scans the entire file system to find new files.

All of this will get fixed over time, I am sure, although things are kind of rough right now.

One of the more interesting things is that I have gotten some insight into how OneDrive is built.

OneDrive has four core elements:

  1. Skydrive.exe, the sync engine that actually copies the data to the cloud.
  2. WSearch, the Windows search engine, which doubles as the way OneDrive keeps track of files on the file system.
  3. The use of stubs to manage offline files and provide the illusion of a single file system. Possibly the only good use of hierarchical storage management in the history of the snake oil known as HSM.
  4. A pretty UI.

All of these technologies have been around for many, many years; OneDrive is really a repackaging of them.

OneDrive, as a product, feels like something cobbled together over time without the architectural integrity of competing products like DropBox or Google Drive. This probably reflects the ambivalence Microsoft had towards cloud services. I am encouraged to read that for Windows 10, Microsoft is working to improve OneDrive significantly.

One of the challenges for a company like Microsoft is that building a product with the feature set of DropBox is easy, but building a competitive product is a completely different can of worms. A competitive product requires a deeper level of engineering than the cobbling together of repurposed technologies that the current OneDrive is.

Microsoft’s decision to embrace DropBox may reflect that reality.

 

 

 

 

 


Filed Under: innovation, Software, Storage Tagged With: DropBox, Google Drive, Microsoft, OneDrive

Hardware not Software is Eating the World: Len was right.

November 28, 2014 by kostadis roussos

In 1996, Len Widra, a Principal Engineer at SGI, and I got into a heated argument over the importance of software vs. hardware.

I was a kid out of school with an ego to match. Of course, I thought I knew everything. The crux of our debate was whether software or hardware was the important technology.

Len’s observation was that software was irrelevant or something like that. Hardware, he observed, was the important technology.

As a software engineer, I found this infuriating. As a computer scientist, I found it unkind. How dare he say that a bunch of silicon was more important than my code?

It’s been almost 18 years, and I’ve learned more.

What I have learned is that new software rarely, if ever, displaces old software unless some new hardware shows up. New hardware shows up, and that new hardware makes the old software irrelevant or obsolete.

There is one interesting caveat. Some software applications are really dependent on the quality of the algorithms, and as the algorithms improve, the software gets obsoleted regardless of the underlying hardware changes. In many cases, the emergence of new algorithms creates new hardware that helps obsolete the old software.

For the vast majority of software systems, however, that’s not the case.

When you are looking for a new opportunity in the technology space, look for where new hardware is emerging. If it is sufficiently different, that new hardware will obsolete the old software that was tied to the old hardware, creating new opportunities for new software.

A mouthful.

A few examples:

(1) the emergence of x86 servers created the opening for Linux. Before x86 servers were a reality, the UNIX vendors owned the entire software and hardware stack. When x86 became good enough, a new software stack could win because the software used new hardware.

(2) Flash in the storage industry has truly created a massive disruption, enabling many different kinds of software stacks.

(3) Merchant silicon (e.g., Broadcom) is disrupting the networking space in a way that reminds me of the x86 disruption.

(4) ARM processors made mobile computing plausible.

Maybe my favorite example is this picture from TIOBE Software, which measures the popularity of programming languages. TIOBE measures popularity, not use or lines of code, and has been doing that analysis for many years:

[Chart: TIOBE programming language popularity index, November 2014]

You look at the chart and realize how slowly programming language popularity changes, with one exception: Objective-C. The popularity of a single programming language changed dramatically, not because it was good or bad, but because a single new hardware platform, the iPhone, enabled new software.

The hardware disrupts because it enables software that was impossible before. The carefully calibrated trade-offs baked into a system are tossed into the sea with new hardware. When you want to look for disruptions to your business, never look at the software; software is irrelevant; look at the hardware.

 


Filed Under: Hardware, innovation, Software Tagged With: Disruption

x-platform mobile technologies

November 24, 2014 by kostadis roussos

The folks at Google wrote about their new toolkit  for their new mail app. An app, by the way, that is actually great.

I was convinced, from using the app, that they had a lot of platform-specific code. Instead, being great engineers, they cracked a hard nut, one I didn't think was going to get cracked anytime soon: how do you build a UI-rich application without writing most of the code twice?

The challenge with UI-rich products is that they must interact with the native software interface of the device. And each platform's native interface is very different and is written in very different programming languages.

What Google has done is very interesting. My stock recommendation for UI-rich applications is a core written in C++ plus a bunch of platform-specific code for each device; the Prezi guys did this to great effect, and many of my former Zynga colleagues are doing the same. The approach Google has taken may indicate a new third way.

I must dig in some more…

 


Filed Under: innovation, Software, Zynga Tagged With: x-platform mobile

The bullish case for storage vendors in a world of free cloud storage?

November 15, 2014 by kostadis roussos

The arrival of free consumer storage was predicted a while ago. As more market entrants arrived and the complexity of delivering online storage declined, the value of just storing bits and bytes would naturally go to zero.

The question was what that would mean for the market as a whole.

Aaron Levie, probably vastly ahead of the curve, figured this out and has repeatedly said that he’s moving up the stack in terms of capabilities.

The “race to zero” in cloud storage: “We see ourselves really in a ‘race to the top.’ And what we mean by that is, we are in a race to constantly add more and more value on top of storage, on top of computing, to deliver more capabilities, more kind of industry-unique experiences, greater depth of our platform functionality to the industries that we’re going after….If people confuse us with being in the storage business or whatever, that’s a cost of just the fact that there are just so many companies out there, there’s so much going on on the internet. ”

Read more: http://www.businessinsider.com/box-ceo-aaron-levie-qa-2014-11#ixzz3J5i3xZLg

 

The recent partnership between DropBox and MSFT is probably an example of how an early entrant's most valuable current asset is the large number of customers it already has. This "race to zero" is not just a problem for the Storage-as-a-Service vendors; zero-cost storage is a problem for all Software-as-a-Service vendors. Customers will expect to store arbitrary amounts of data within a SaaS product and not pay for the storage, including backups, archives, etc.

The challenge facing everyone in the SaaS market, though, is that although storage is free to the end user, there is a non-trivial cost to buying and managing storage, and the cost of managing, supporting, and maintaining that storage will increase over time as more customers use it. Worse, the importance of the storage will increase over time as more customers depend on the data existing and remaining accessible.

The paradox facing SaaS vendors is that storage is free, but data is priceless, and to store data you need storage! So they will have to invest in their storage infrastructure; the question is how.

In the first phase of SaaS, vendors could legitimately build their own storage infrastructures. The ability to monetize the storage as storage made it possible for the SaaS vendors to hire and invest and innovate in storage infrastructure. And innovation was necessary because traditional storage infrastructure simply did not meet the requirements that the new SaaS customers had either in scale or in deployment type or in cost.

What this means is that if you are a SaaS vendor, and building your own is the only model, then over time you will have to become an increasingly great storage company to meet your expanding business needs, even though that is not where you should be adding value. This will require hiring a slew of technology experts to solve problems that are fundamentally outside your core business.

And given that there are more SaaS companies than storage experts, this will both drive up salaries for storage experts and create a market for storage infrastructure services.

Why will it create a market? Because hiring will be hard, the dollars will be big, companies will look for products that meet their needs, and entrepreneurs will try to satisfy that need.

And what will eventually happen is that SaaS companies will just buy storage infrastructure from infrastructure vendors, who can make bigger and deeper investments than any single SaaS player because they can leverage those investments across many customers. SaaS players will naturally gravitate to the vendors that can provide high-quality storage solutions. My theory is that sooner rather than later, SaaS companies will focus on leveraging technology built by storage infrastructure vendors instead of rolling their own.

Why will they buy instead of build?

Storage vendors are better suited to deliver low-cost, reliable storage because they have a deeper understanding of supply chains and a deeper bench of people working on making storage efficient than any single SaaS provider does. In effect, a small number of storage vendors supplying a large number of SaaS vendors will provide more cost-efficient storage than every SaaS vendor building storage infrastructure on its own.

But it’s all about the people.

Look, even if you don't buy anything else I said, let me leave you with this thought: hiring a bunch of guys who know storage is hard. Retaining them is even harder. It's much easier to buy a product from someone whose job it is to hire and retain a storage team, especially if the storage is a product you use, not something you directly monetize. A product you buy is a cost that is easier to control: you can spend more faster and cut spending faster with no real impact on the quality of the product over the long term. A product you build is a team you have to hire and lay off when you want to manage costs, and that will impact the quality of the product over the long term.

This is huge

For the last seven years of my career, I have been rather bearish on the storage infrastructure business. Ironically, because storage is not monetizable but very valuable, because doing it cheaply, efficiently, and reliably is very important to any SaaS vendor, because it's impossible for every SaaS vendor to do it properly, and because the number of SaaS vendors is going to explode, I believe for the first time in ages that storage sold by infrastructure vendors to SaaS players is going to grow very rapidly.

So I am bullish on storage.

And I suppose as someone who once declared that enterprise storage is dead, that is a surprising conclusion.


Filed Under: innovation, Software

How do box vendors get disrupted?

October 22, 2014 by kostadis roussos


One of the more interesting questions confronting anyone who works at a box company, like I do, is what causes a vendor to get disrupted?

There are a lot of business and technical reasons, covered in a wide variety of sources…

My pet theory is the following:

A box vendor produces a box that has to be deployed as a box because of what it does. For example, to switch you need a box that can sit between three physical cables and make decisions about where to forward the packets.

Deploying a box is a pain in the ass. Replacing a box is hard.

And the problem is that once you, as a customer, deploy a box, you realize that you need the box to do more stuff.

And the vendor starts adding software features into the box to meet that need.

And at some point in time, the box vendor believes that the value is the software and not the box. And they are partly right, except that the only reason the customer is buying the software from the box vendor is because they must buy the box.

And the box over time becomes an increasingly complex software system that can do more and more and more and more.

And software engineers hate complexity. And where there is complexity there is opportunity to build something simpler. And competition tries to break into the market by making a simpler box.

The problem with the simpler box is that if the set of things a customer needs to do is A, and you can do A/2, you're simpler and incomplete. Inevitably you will become as complex as the original box.

What causes the disruption is when the customer no longer needs to deploy the box.

To pick an example that I can talk about, many vendors in the storage industry used spinning rust disk drives to store data. When customers decided that they no longer wanted to use spinning rust to store data, vendors like Nimble and Pure started to win in the market because they stored data in flash.

Nimble and Pure certainly didn't have the feature set of their competitors; how could they? The reason they won deals was that the decision criterion for the customer wasn't software; it was the desire to store data differently, on a different kind of physical media: flash. The combination of a customer desire to store the data differently, coupled with a simpler box, made it possible for Nimble and Pure to win in the marketplace.

To put it differently, Pure may, for all I know, have A/5 of the features of the competition, but if the first-order decision is that you want to store data on flash in an external array, then that is irrelevant, because you're not comparing Pure to a spinning rust array; you're comparing Pure to another flash array. And there Pure has an advantage.

The networking industry has stubbornly resisted disruption for years. And part of the reason is that the physical box hasn’t really changed over time. Parts of the industry have changed, and overall the same leaders are still winning.

However, there is a possibility of a disruption in the networking industry, in particular, in the modern cloud data center.

The reason is that, for the first time in a long time, the fundamental network stack may be rewired in a unique way.

In an earlier post, I discussed the Network Database. In a traditional network, every network element has to be a full-fledged participant in the Network Database.

And like traditional applications that have to interact with a database to do anything interesting, network services must also interact with the Network Database to do anything interesting.

And it turns out that building an application that uses the Network Database is hard, unless your application fits into that model and, well, runs on the network element.

Companies like to whine that network vendors are slow. Maybe they are, or maybe the problem the vendors are trying to solve, in the way they are trying to solve it, is just hard and takes time. Having worked with folks in this industry, I am convinced of the hardness thesis rather than the laziness thesis.

SDN has the potential to disrupt the model of software applications built as distributed services running on multiple network elements, for one reason: it makes building network applications easier because it aligns with how the vast majority of programmers think. Building applications out of distributed protocols is hard. Building applications on a centralized database is easy. And to the claim that you'll need multiple databases to scale: that, too, turns out to be easy; after all, that's what the web guys have been doing for years.

And that creates an interesting disruption in the network stack. That is different than flash and disk drives but potentially as massive.

The value of the software stack that the traditional vendors have built over time begins to diminish as more services get built using the different model. One argument is that it will take time for the new services to be as complete as the old model, and that is true. If you believe, however, that the new programming model is more efficient and expands the pool of programmers by a step function, then the gap may close significantly faster.

Having said all of that, I am reminded of a saying:

Avec des si et des mais, on mettrait Paris dans une bouteille. (With ifs and buts, you could put Paris in a bottle.)

The network box vendors are making their strategic plays as well, and the industry will change, and we will most likely still see the same players on top.

 

 

 


Filed Under: innovation, Software

Lambda C++

September 27, 2014 by kostadis roussos

After a long gap between large C++ systems, I have been catching up on the language. It feels like meeting a high-school friend whom you didn't friend on Facebook.

One of the things that got me to realize that this was not my childhood’s C++ was the existence of lambdas.

At first, I was like: EWWWW… first we had Java envy and now we have Scala envy… does anything ever change?

Except now that I am starting to dig into this little feature, the fact that you can write this piece of code is wicked convenient:

#include <algorithm>  // std::for_each
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> v;
    v.push_back( 1 );
    v.push_back( 2 );
    //...
    // The lambda runs once per element; no hand-written functor needed.
    for_each( v.begin(), v.end(), [] (int val)
    {
        cout << val;
    } );
    return 0;
}

My personal frustration with using STL may be finally overcome…


Filed Under: Software
