wrong tool

You are finite. Zathras is finite. This is wrong tool.


Dropbox will start showing ads Real Soon Now

October 27, 2014 by kostadis roussos Leave a Comment

With MSFT dropping the marginal price of storage for consumers to 0, what does this mean for Dropbox?

The old business model, which was really awesome, was that each user represented a permanent annuity. As a user consumed more storage over time, the user paid more. And as the user consumed more, moving the user's data became less practical. With features like photo sharing and data sharing, moving data became harder and harder.

Although I am sure that Dropbox assumed the cost per GB would drop over time, the plan had to assume that the price never reached zero and that revenue always grew as people stored more.

This was a sound business model until or unless the annuity goes away.

And that is what MSFT just did: they eliminated the annuity business. I am sure that Dropbox will resist. But here is what will happen: as people who push into the higher and higher cost tiers start looking at their bills, the desire to move to cheaper solutions will outweigh the inconvenience. They will either move all of their data, or start moving parts of it, to newer, cheaper solutions.

The net effect is that at a price of zero dollars, it makes a lot of sense to use the free Dropbox offering and then, when you would otherwise have to pay, go to MSFT for any excess data.

Now Dropbox has to come up with a new plan. Their annuity strategy is crippled.

And the new plan may be advertising. Dropbox was a storage company that offered file sharing in the cloud. Now it is a content repository with some nifty content management and content sharing tools for consumers. Companies that provide tools for consumers but cannot grow their revenue as an annuity will turn to monetizing their customers more efficiently. And with all of that user data, the temptation to use it for advertising will be great.

Gmail made it okay to have your email automatically scanned for advertising – I wish I could have seen the ads on General Petraeus's account – so you have to believe Dropbox customers will be okay with this as well…

 


Filed Under: innovation, Storage

And now unlimited – MSFT lays down the gauntlet

October 27, 2014 by kostadis roussos 4 Comments

Microsoft just announced that they are offering unlimited OneDrive storage for ~$7 a month along with Office 365.

Monetizing capacity in the storage industry is very hard. The industry managed it because of the cabling limitations of controllers and disk drives: eventually you needed to buy a new controller because you could attach no more disk drives to the old one. In the cloud the consumer never has to buy another controller, so the need to buy new hardware to increase capacity never arises.

The fact that capacity is now effectively free – you're paying for Office 365 – shows that to be true.

After all, at a cost per GB of about $0.03, charging roughly 3x the cost of the media ($100 vs. $30 for a terabyte) is unsustainable.
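The arithmetic behind that claim can be sketched directly (the numbers are the post's illustrative figures, not actual vendor pricing):

```python
# Rough storage economics: at ~$0.03 per GB of raw media, what does a
# terabyte cost, and what is the markup if a vendor charges ~$100 for it?

COST_PER_GB = 0.03          # dollars per GB, the commodity price cited above
GB_PER_TB = 1000            # decimal terabyte

media_cost_per_tb = COST_PER_GB * GB_PER_TB   # ~$30 of raw media
vendor_price_per_tb = 100                     # dollars, the post's figure

markup = vendor_price_per_tb / media_cost_per_tb
print(f"media: ${media_cost_per_tb:.0f}/TB, markup: {markup:.1f}x")
```

Once the media cost rounds toward zero, that multiple has nothing left to multiply, which is the post's point.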

First movers in this space offered the novelty of cloud capacity; now that the capability has been commoditized, the end game for vendors in this space is going to be – interesting.

Other vendors will have to react to this change. How they will is very unclear.

Google will quickly follow with a similar offer. I expect Box to be forced to do the same, since they are competing with Google and MSFT for the same customers. Dropbox will fight the hardest to avoid doing this, but they too will eventually fold.

Edit: Fixed the pricing to be monthly instead of yearly. 


Filed Under: innovation, Storage

Bracket Computing Revealed!

October 24, 2014 by kostadis roussos Leave a Comment

My friend Jason Lango, CTO and Founder of Bracket Computing, is finally, YAY!, bringing Bracket out of stealth.

His ambition was always to create a great company that would build significant new technology. And Bracket’s initial announcements do not disappoint. He’s put together a great team and the team is targeting a great problem. Good things happen when you do that.

I am, and have been for a very long time, a huge fan of Jason – I was the best man at his wedding – so I am excited to learn about their new technology. Whenever we've worked together, he's always done great work, and this is going to be no different…

And why am I so excited? Because I might know what happened in Vegas, but I only get to learn what goes on at Bracket after the public announcement 🙂

Definitely looking forward to learning more! No doubts that they will make great things happen!

 

 


Filed Under: innovation

How do box vendors get disrupted?

October 22, 2014 by kostadis roussos 2 Comments


One of the more interesting questions confronting anyone who works at a box company, like I do, is what causes a vendor to get disrupted?

There are a lot of business reasons, and technical reasons covered in a wide variety of sources…

My pet theory is the following:

A box vendor produces a box that has to be deployed as a box because of what it does. For example, to switch packets you need a box that sits between physical cables and decides where to forward each packet.

Deploying a box is a pain in the ass. Replacing a box is hard.

And the problem is that once you, as a customer, deploy a box, you realize that you need the box to do more stuff.

And the vendor starts adding software features into the box to meet that need.

And at some point in time, the box vendor believes that the value is the software and not the box. And they are partly right, except that the only reason the customer is buying the software from the box vendor is because they must buy the box.

And the box over time becomes an increasingly complex software system that can do more and more and more and more.

And software engineers hate complexity. And where there is complexity there is opportunity to build something simpler. And competition tries to break into the market by making a simpler box.

The problem with the simpler box is that if the set of things a customer needs to do is A, and you can do A/2, you're simpler and incomplete. Inevitably you will become as complex as the original box.

What causes the disruption is when the customer no longer needs to deploy the box.

To pick an example that I can talk about, many vendors in the storage industry used spinning rust disk drives to store data. When customers decided that they no longer wanted to use spinning rust to store data, vendors like Nimble and Pure started to win in the market because they stored data in flash.

Nimble and Pure certainly didn't have the feature set of their competitors – how could they have? The reason they won deals was that the decision criterion for the customer wasn't software; it was the desire to store data differently, on a different kind of physical media: flash. The combination of a customer desire to store data differently and a simpler box made it possible for Nimble and Pure to win in the marketplace.

To put it differently, Pure may, for all I know, have A/5 of the features of the competition, but if the first-order decision is that you want to store data on flash in an external array, then that is irrelevant, because you're not comparing Pure to a spinning-rust array; you're comparing Pure to another flash array. And there Pure has an advantage.

The networking industry has stubbornly resisted disruption for years. And part of the reason is that the physical box hasn’t really changed over time. Parts of the industry have changed, and overall the same leaders are still winning.

However, there is a possibility of a disruption in the networking industry, in particular, in the modern cloud data center.

The reason is that, for the first time in a long time, the fundamental network stack may be re-wired in a genuinely new way.

In an earlier post, I discussed the Network Database. In a traditional network, every network element has to be a full-fledged participant in the Network Database.

And like traditional applications that have to interact with a database to do anything interesting, network services must also interact with the Network Database to do anything interesting.

And it turns out that building an application that uses the Network Database is hard, unless your application fits into that model and … well … runs on the network element.

Companies like to whine that network vendors are slow. Maybe they are – or maybe the problem vendors are trying to solve, in the way they are trying to solve it, is just hard and takes time. Having worked with folks in this industry, I am convinced of the hardness thesis rather than the laziness thesis.

SDN has the potential to disrupt the model of software applications being built as distributed services running on multiple network elements, for one reason: it actually makes building network applications easier, because it aligns with how the vast majority of programmers think. Building applications out of distributed protocols is hard. Building applications on a centralized database is easy. There are claims that you'll need multiple databases to scale, and it turns out that too is easy – after all, that's what the web guys have been doing for years.
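The "centralized database is easy" claim can be illustrated with a toy sketch (hypothetical names, not any real SDN controller API): against a centralized network database, a "network application" is ordinary database-style code rather than a distributed protocol running on every element.

```python
# Toy centralized Network Database: one table of links, as a controller
# might hold it. A routing "application" is then plain BFS over the table,
# with no per-switch protocol participation required.

from collections import deque

LINKS = {
    "sw1": ["sw2", "sw3"],
    "sw2": ["sw1", "sw4"],
    "sw3": ["sw1", "sw4"],
    "sw4": ["sw2", "sw3"],
}

def shortest_path(src, dst):
    """Breadth-first search over the central link table."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("sw1", "sw4"))  # ['sw1', 'sw2', 'sw4']
```

The distributed equivalent of this handful of lines is a link-state protocol running on every element, which is exactly the gap in programmer effort the post is describing.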

And that creates an interesting disruption in the network stack. That is different than flash and disk drives but potentially as massive.

The value of the software stack that the traditional vendors have built over time begins to diminish as more services get built using a different model. One argument is that it will take time for the new services to be as complete as the old model, and that is true. If you believe, however, that the new programming model is more efficient and expands the pool of programmers by a step function, then the gap may be closed significantly faster.

Having said all of that, I am reminded of a saying:

Avec des si et des mais, on mettrait Paris dans une bouteille. ("With ifs and buts, one could put Paris in a bottle.")

The Network Box vendors are making their strategic play as well, and the industry will change and we will most likely still see the same players on top ….

 

 

 


Filed Under: innovation, Software

Software Archaeologists

September 17, 2014 by kostadis roussos Leave a Comment


Work at a large company with a massive code base that has evolved over many years, and you eventually have to engage in Software Archaeology.

Software archaeology is the process of trying to understand critical software systems that are poorly documented, poorly understood, or not understood at all, yet super-critical to your system.

Imagine you have some module that you are dependent on that has worked for years, and then a bug is uncovered.

You have to go and learn what exactly the module does. By the time you look at the code, the original authors – and their children – have all left the company; or, even if they haven't, it's been years since they worked on the code… Sometimes it's been years since they looked at code, period, now being managers or directors or vice-presidents…

At times you can feel like you’re one of those modern explorers violating some tribal rules by exploring in areas that are forbidden. And if you have to modify the code, you wonder if you are like Indiana Jones about to remove the gold idol…


The problem isn’t understanding the structure of the code, software is software. The problem is understanding the intent of the code: understanding where choices were made that were well reasoned, and where choices were made that were expedient. And the problem isn’t even the software that you have to inspect, but that it’s part of a broader sea of software that the ancients wrote that is equally opaque and mysterious.

And the real problem is that static analysis is fine, but what you really need to understand is how the running system behaves: how it uses memory, how it uses the CPU, how the data structures grow and shrink, what the heartbeat of the system is…

At SGI in the late '90s I did some compiler research as a master's student to try to address this specific problem. In particular, how do you infer things like locking hierarchies in a multi-million-line code base when the original authors of the code have since left? Just reading the code is insufficient. Knowing which locks are taken is important, but you also need to understand contention, the frequency of locking, and the interplay between systems so widely separated as to be mysterious…

When I tried to solve the problem, I looked at using compilers to insert code everywhere something looked like a lock, and then using the testing infrastructure to find the locks and their hierarchies…
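The run-time half of that idea can be sketched in a few lines (a hypothetical toy, far simpler than the compiler instrumentation described): wrap lock acquisition, record which locks are already held when a new one is taken, and look for cycles in the observed order.

```python
# Toy dynamic lock-order inference: every acquire records an edge from each
# already-held lock to the new one. A cycle in that graph means two code
# paths take the same locks in opposite orders -- a potential deadlock.

import threading

_held = threading.local()
order = {}   # lock name -> set of lock names acquired while it was held

class TracedLock:
    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()

    def acquire(self):
        held = getattr(_held, "stack", [])
        for h in held:                       # record edges h -> self.name
            order.setdefault(h, set()).add(self.name)
        self._lock.acquire()
        held.append(self.name)
        _held.stack = held

    def release(self):
        _held.stack.remove(self.name)
        self._lock.release()

def has_cycle():
    """True if the observed acquisition order contains a cycle."""
    def visit(node, path):
        if node in path:
            return True
        return any(visit(n, path | {node}) for n in order.get(node, ()))
    return any(visit(n, set()) for n in order)

a, b = TracedLock("a"), TracedLock("b")
a.acquire(); b.acquire(); b.release(); a.release()   # observed: a -> b
print(has_cycle())  # False -- the hierarchy is consistent so far
```

Running this under an existing test suite is exactly where the post's next observation bites: the code of the ancients had no tests to drive it.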

And then I discovered that the code of the ancients, because it had always worked, had no tests.

Doing your run-time analysis involves figuring out how to test things that always worked. And then you uncover not just one bug, but hundreds… Or are your tests broken because you don’t understand the code… And you wonder as you play with this mysterious gadget… Are you about to destroy the world? Or save it? How many of these bugs are real? And how many of them are real but other pieces of software have worked around those bugs making the whole system work?

And what will be the blow-back of fixing them…

 

And while you sit there with the svn commit about to change the structure of the code of the ancients, you wonder if your hubris is about to bite you in the ass… How could code that has worked for so long, be broken?

And your management team looks at you like those tribal leaders who looked at the explorers, with suspicion and doubt and fear. And you can hear them telling the village youth to arm themselves and kill the interloper before he causes too much harm… Or are they like the hot woman or man begging the evil villain to not destroy the world, or asking the hero:

Are you sure this is going to work?

No fear, the ancients were humans, just like you, you tell them… And that bag of sand is about the same weight as the idol…

Run Indy! Run!


Filed Under: innovation

The iPhone did change everything: revisiting my predictions from 7 years ago

September 12, 2014 by kostadis roussos Leave a Comment

Seven years ago, I wrote a post questioning the Apple fanboys' claim that the iPhone changed everything.

I got some things very right. And I, obviously, got some things very wrong.

My assumption that, unlike with the iPod, Apple would not rule the cellphone market turned out to be correct. My other assumption, that the iPhone would push the design of phones the way the original Mac did, also turned out to be right.

My original assumption that MS and the cellphone vendors would create a viable alternative ecosystem that would own a much broader chunk than Apple turned out to be partially correct. Instead of MS, the real winner was Google, who released Android.

 

Let’s look at what I said and got right and wrong.

  1. The iPhone was crucial to Apple. And yes it was.
  2. The iPhone was going to push technology trends. Oh boy did it ever.
  3. That it was going to be a marginal player. Oh boy was I wrong! WRONG. The share of profits is staggering.
  4. That integration with laptops was important. WRONG! WRONG! WRONG! Obviously I had no idea how much more important integration with cloud services was going to be.
  5. That Microsoft was going to be the bigger threat. OOOPS! Google wasn't even mentioned! Of course, I wrote the post before Android shipped, so that's my defense. I could not imagine that it would take MS so frigging long to build a credible OS for the phone. And I suspect that Nadella will end that experiment soon.
  6. That the mobile phone providers were going to compete and push Apple into a niche. Right, but I guessed wrong on who would do it. Samsung did. Nokia made the strategically boneheaded decision to go with Windows Mobile instead of Android. A bunch of other players did some good stuff.
  7. The laptop market that I thought was important turns out to be less important than I ever could have imagined for Apple.
  8. And of course, I completely misunderstood the app economy.

Seven years later, it's fun to point out how wrong everyone else got it – but it's even more fun to see how wrong YOU got it 😉


Filed Under: innovation, Jobs

Why I hate the Great Filter known as Code Reviews

July 20, 2014 by kostadis roussos Leave a Comment

Code reviews have, in the wrong environment, enabled engineers to relieve themselves of accountability for writing good code, turned technology leaders into powerless whiners, and enabled managers to ship crappy products on time while remaining blameless for the crap.

Much like the Great Filter that supposedly eliminates civilizations, code reviews are supposed to eliminate bad code and, used that way, end up eliminating good code as well. The net effect is that the code reviewer is reviewing crap all day. And as a technologist, I find that very, very annoying.

Let’s step back.

A lot of management culture in many large organizations is focused on making sure the lazy-ass employees do their jobs correctly, the theory being that, at scale, the average employee is … average.

Given the average nature of the average employee, how do you make sure that quality doesn't degrade below average?

Hold the managers accountable for the quality of their work.

In tech companies, a key part of the work of engineers is writing code.

The approach some companies take is to have the managers accountable for the quality of the code.

The problem is that managers are also accountable for shipping on time. And the pressure to ship on time creates an unbearable pressure on the managers to create a culture that pushes date over code quality.

The solution is to create a separate set of leaders who are responsible for technology quality – people like me, who have titles like Distinguished Engineer or Architect.

The theory being that the tension between the technology leader and the manager will result in solutions that meet the business requirements for both date and quality.

An unfortunate outcome of this process is that the technology leaders are then tasked with ensuring that the code quality is good enough because that is their job. And the process that is used to ensure that code is good enough is the code review.

The problem – and this drives me nuts – is that in the wrong hands, accountability for the quality of the code lands on the reviewer, not the author.

A big bug escapes and the first question isn’t:

Who wrote the code?

but

Who reviewed it?

Who tested it?

This creates a perception that the author of the code isn’t actually accountable for the bug. The author is, even though they wrote the code, blameless.

This makes the reviewer of the technology the organization's bitch. On the one hand, you're accountable for the quality of the technology; on the other, no one reports to you, and managers decide bonuses.

Guess what loses?

Yup, the quality of the technology…

How do you fix this mess?

First, you need to change the culture of what a code review is there for, and you need to change the relationship between the manager and the technologist.

The code review has to be not about finding bugs but about improving the overall quality of the product being produced. The way I like to describe it: everyone is supposed to be doing their best work, and reviews exist to make great work better, not to filter out bad work.

The manager and the technologist have to both be responsible for the quality of the technology and the date. If the project misses the date or the technology sucks they both failed.

Returning to the title of my post: code reviews, when they act as a filter instead of a booster, reflect a dysfunctional organization that is broken at its core. And the reason I have hated code reviews is that the problem wasn't the review; the problem was the underlying first principles that motivated their existence.

 


Filed Under: innovation

Rethinking the Internet of Things

July 11, 2014 by kostadis roussos 3 Comments

Over the last month I've been struggling with the Internet of Things. My scale has an internet connection, my TV has an internet connection, my toe has an internet connection, and soon my watch will have an internet connection – and managing the Wi-Fi passwords and connectivity was a pain in the ass.

I kept telling my wife that we need a better solution.

Turns out two very interesting technology trends are going to radically remake the internet of things.

The first is that 4G LTE chips are really cheap. For those not in the know, it used to be the case that building a chip that could do cellular was black magic, but with 4G LTE this is no longer true. Thus "how do I connect to the internet?" becomes "how do I make a cellular call?" – a much simpler user experience.

The second is that Bluetooth LE is transformative. If you think about the internet of things, the things are actually generating small amounts of data, frequently or infrequently. For devices that don't have enough power to justify a 4G LTE chip, Bluetooth LE is a really interesting alternative. Bluetooth LE, if you believe the marketing buzz, will let a device keep transmitting for years without a battery replacement. Given the range of Bluetooth LE and the existence of the ultimate Bluetooth LE receiver – your cell phone – I can totally imagine a combination of Bluetooth LE talking transparently to your phone, and your phone talking transparently to the internet, to transmit the data.

The core objection I had to the Internet of Things seems to be addressable with existing technology that is going to market right now.

Cool.

 


Filed Under: innovation

Synchronicity

July 10, 2014 by kostadis roussos Leave a Comment

For many different reasons, I finally got around to reading Beautiful Code.

Because Bryan Cantrill happened to be a classmate of mine, I started with his chapter.

The synchronicity is that in 1999 he was working on a problem related to priority inversion on Solaris. Intriguingly, at the same time, a friend of mine at SGI was also working on a similar problem related to priority inversion.

In effect, both companies had decided – independently – to build real-time kernels and had independently run into similar, though far from identical, problems. At SGI we had to figure out how to make reader-writer locks deal with priority inversion without crippling performance.
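For readers who haven't met the problem: priority inversion can be shown with a toy, deterministic scheduling trace (nothing like the real SGI or Solaris kernels, and a plain mutex rather than a reader-writer lock). Without priority inheritance, a medium-priority task runs ahead of the low-priority lock holder that a high-priority task is waiting on.

```python
# Toy trace: "low" holds a lock "high" wants; "medium" just wants CPU.
# The scheduler always runs the highest-priority runnable task.

def run(inheritance):
    prio = {"low": 1, "medium": 2, "high": 3}
    if inheritance:
        prio["low"] = prio["high"]        # holder inherits waiter's priority
    runnable = {"low", "medium"}          # "high" is blocked on the lock
    trace = []
    while runnable:
        nxt = max(runnable, key=lambda t: prio[t])
        trace.append(nxt)
        runnable.discard(nxt)
        if nxt == "low":
            runnable.add("high")          # lock released; waiter unblocked
    return trace

print(run(inheritance=False))  # ['medium', 'low', 'high'] -- the inversion
print(run(inheritance=True))   # ['low', 'high', 'medium'] -- fixed
```

The reader-writer case SGI faced is harder than this sketch because a rwlock can have many concurrent readers, so there is no single holder to boost.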

What made this fascinating is that I tend to agree with Bryan when he talks about software being like math.

 


 

Although I dread using that analogy because real mathematicians would raise an eyebrow….

The more important point is that once you decide to tackle an area in software, the solutions tend to have the same attributes and the challenges tend to be the same; what makes it software engineering is that the actual solutions differ greatly, because they are embodied in specific implementations.

 


Filed Under: innovation

Lessons on Mobile Gaming and Cross-Platform

July 7, 2014 by kostadis roussos 3 Comments

We have a Skype chat where a whole bunch of the ex-Zynga senior technologists (we called them CTOs) get to chat about stuff we learned.

One thing that came up was our attempt to build web-mobile-backend cross-platform technology. As a key driver of that effort, I learned that it was misguided.

Because I like the immediacy of the dialog, I am including the Skype chat:

[] kostadis roussos: if I was going to enumerate my biggest professional mistake it would be to completely misunderstand the mobile gaming market need for specialized gaming engines targeted at specific games.
[] kostadis roussos: The reality was that I didn’t understand the importance of having a hyper-optimized game for a specific platform and that it was more important to have the BEST game implementation than leverage.
[] kostadis roussos: That leverage was basically unimportant – as a gaming company – … What was important was delivering the best possible user experience.
[] XXX: it’s a fundamentally hard problem to solve and i think what I took away is being polyglot is preferable
[] XXX: it seems like you’re really just talking about the failures of flash/html5 on mobile, not so much a “specialized gaming engine” since cocos2d / unreal / unity are all cross platform
[] kostadis roussos: @XXX – intriguingly in 2013 when we did a whole bunch of analysis 8/10 games had their own custom engines.
[] kostadis roussos: Maybe the data has shifted – and I’ll accept that notion but you really need to have a highly customizable engine where you are not giving up any performance or user-experience .
[] kostadis roussos: My point is that I completely misunderstood the centrality of that point. I didn’t get that we need to think of a game as CMS with a very optimized and specialized “rendering” system and that we needed to have the best one for any game genre or we would lose.
[] kostadis roussos: And so I pursued – html5 as a solution, flash as a solution, and then the PlayScript language we built – all of these were attempts to solve the wrong problem. We should have been building the best UX system for I/E games in native languages, with some C++ for the business-logic bits that could be shared across multiple platforms.
[] kostadis roussos: Oh well. Lesson learned.

My only personal consolation is that I gave the Prezi guys some good advice based on my learnings. They feel good about the outcome.


Filed Under: innovation, Zynga
