

The Universal Translator and Thank You Google

November 23, 2014 by kostadis roussos

[Image: the Star Trek universal translator]

An old friend and his family came to stay at my sister's house in Santorini (which you can stay at too, and should!).

What was really amazing was that my friend and his children only speak English, and my cousins only speak Greek and some broken English. And yet they were able to communicate and become friends, because when they couldn't explain things to each other, when the limits of their shared understanding came to the fore, they relied on the 21st-century universal translator:

[Screenshot: Google Translate]

The world is becoming a smaller place because some companies are trying to change the world…

 


Filed Under: innovation Tagged With: Google

Things are getting better with Data Science and our Data

November 21, 2014 by kostadis roussos

Yesterday I wrote about how the failure of companies to respect the privacy and happiness of their customers posed an existential threat to the entirety of services that relied on big data.

Some folks on Twitter remarked that my Data Scientist Hippocratic Oath is what their companies live and breathe.

@mattocko @kostadis_tech at LinkedIn statements like that oath were part of the company values, which were drilled into you every day.

— Peter Skomoroch (@peteskomoroch) November 20, 2014

And that's great. I think that protecting user data aligns with being a great company… And I think a great company sometimes may need to be explicit about how it thinks about user data.

Juliet asked how this applies when the customer isn't a person. I guess we need to refine the oath to be a little more specific: instead of customers, we should talk about people.

  1. I will do no intentional harm. I will not knowingly manipulate people to be unhappy or sad or miserable without their explicit, clear, and obvious consent.
  2. I will never use our data in ways that are not aligned with the needs of the person whose data it is.
  3. The company is not the customer, and if I must choose between the person whose data it is and the company, I will always choose the person. My job is to protect the user's data, not the company's survival.

And while I called out Uber and Facebook in my post, it's only fair to share with folks that Facebook has been working to create a better code for its data science efforts, described here, and that the folks at Uber have hired an outside team to look at their approach to privacy.

I believe that the self-interest of companies holding vast amounts of data we want kept private will ensure that the data stays private, because if it doesn't, we will have encryption deployed everywhere. And I am delighted to see that happening.

And we're seeing that. We, as users, need to demand that kind of data protection. The alternative is what happens in medicine, where data is so regulated that our ability to fight diseases is being impaired.

 


Filed Under: innovation, Security Tagged With: Big Data

A modest proposal for a Hippocratic Oath for Data Scientists

November 19, 2014 by kostadis roussos

Over the last two years, two troubling incidents from large companies have demonstrated the challenge facing modern companies that rely on large amounts of aggregated data to provide a compelling service.

The first was Facebook’s usage of data to test how people behaved when they were made upset.

The second was Uber’s threat to use information about what we are doing with the service for less than admirable reasons.

In both cases, customers had – unknowingly – handed over vast amounts of private information to a service provider that could use that data in ways that didn't align with their best interests.

Although it’s easy to demonize Facebook and Uber, let’s not. Big Data is an industry that has exploded, and we are all collectively learning how to behave. Every industry has gone through a maturation phase. And my goal is to suggest a path for good because I believe in the value of these services.

The danger for service providers is twofold.

The first is that customers will begin to distrust the service providers and stop using the service. I am a very private person about certain very personal topics, and Facebook's confusing and changing approach to privacy has made it difficult to use the service for topics where privacy matters. This is a switch from early in my life with FB, when its commitment to my privacy was why I used the product.

The second is that customers will demand, or gravitate to, services that provide privacy, and as a result the ability to use the data to deliver better and more personalized services will be hurt. This is the more likely outcome, and what is most likely is that incumbents will provide that security. Almost as if to prove my point, WhatsApp now has a fully encrypted messaging channel. With fully encrypted messaging channels, spam becomes a much harder problem to solve and ad-supported free mail services become much harder to monetize; this is not necessarily a better world for consumers…

What needs to be done?

I believe that people are decent. The world works not because we police it but because decent people know what they should do. We have morals and standards, and decent human beings gravitate to them.

With Big Data and the power to know more about your customers and manipulate your customers in ways that are not always aligned with their wants and desires, the danger for unintended evil is great.

Much like some medical experiments are not done because we think they are wrong, some data uses are wrong.

I don’t think regulation is necessary at this time. I think regulation will hurt the industry. What I do think is needed is a Hippocratic Oath for Data Scientists. If we agree on what is acceptable behavior, then most decent people will behave in the right way, and then the bad ones will be easily identified as bad actors.

More laws won’t protect us from bad people; decent people knowing what is right will protect us.

In that spirit, let me offer an oath.

  1. I will do no intentional harm. I will not knowingly manipulate people to be unhappy or sad or miserable without their explicit, clear, and obvious consent.
  2. I will never use our data in ways that are not aligned with the customer’s needs.
  3. The company is not the customer, and if I must choose the customer’s needs over the company, I will always do so. My job is to protect the user’s data, not the company’s survival.

I suspect that if those three rules existed and we relied on the basic decency of human beings, the recent justifiable outrage would be significantly more muted, because the things that did happen would not have happened.

We are a new industry, and as a new industry, we are figuring things out, and figuring things out means making mistakes.

Let’s take this opportunity where we made mistakes to make things better.

 

 


Filed Under: innovation

Some performance benchmarks of browser languages

November 18, 2014 by kostadis roussos

Read this really interesting performance analysis of various browser programming languages.

Two things come out of this.

The first is the power of meta-programming languages like Haxe. Using a better programming language, with typeless goop as your target machine language, can deliver great performance. Not entirely surprising, of course, because hardware is untyped 🙂
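To make that concrete, here is a minimal sketch of the idea (using TypeScript as a stand-in for a typed language like Haxe that compiles to JavaScript; the function is invented purely for illustration): the types exist only at compile time, and the emitted JavaScript is exactly the kind of untyped goop the browser runs.

```typescript
// Typed source: the compiler checks the types, then erases them.
function scorePlayer(name: string, kills: number, deaths: number): number {
  return kills * 100 - deaths * 50 + name.length;
}

// The compiled JavaScript is just the untyped version of the same function:
//
//   function scorePlayer(name, kills, deaths) {
//     return kills * 100 - deaths * 50 + name.length;
//   }
//
// which is fine, because the target "machine language" (JS, and ultimately
// the hardware) carries no types at all.

console.log(scorePlayer("zathras", 7, 2)); // 607
```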

I first encountered Haxe four years ago as an answer to my prayers around the mess of PHP and JavaScript inflicting typeless chaos on the world of web programming.

We actually used Haxe at Zynga to write a renderer, if memory serves me right, and got better performance than we could have if we had written it entirely in Flash.

The second is how underwhelming the performance of Emscripten is compared to Haxe when you don't have asm.js. There was a time, before the mobile web died, when I thought that C++ to JS would be important. Now that it's clear the mobile web is deader than a doornail, the value of JavaScript as a front-end mobile programming language is trending to zero fast.

 

 


Filed Under: innovation, Zynga Tagged With: Benchmarks, C++, Emscripten, Haxe, Javascript, Languages, Php, Web Browser

The systems disruption in networking

November 16, 2014 by kostadis roussos

As I spend more time learning about networking, I begin to realize that significant disruptions are happening, and they are not as obvious as "SDN" or "OpenFlow".

One of the most significant differences between a networking device and a server is that the data path stays entirely inside a custom ASIC instead of running on a general-purpose multiprocessor.

Why is this important?

When a customer buys a device, what they are buying is the data path; the rest of the product is a second-order purchasing decision. In a networking device, the data path is entirely in the hardware, and therefore – although software is important – it's significantly less important than the ASIC that processes the packets.

A networking company – for all of the discussion about software – is a hardware company that builds an ASIC that processes packets. They live and die by the success of their hardware, and by hardware I mean the ASIC.

Historically, a networking company built its chips and sold a device.

The disruption happening in the networking space is that, increasingly, the networking ASICs for important segments are being built by vendors that sell to multiple networking companies.

This is very similar to what happened in the late 1990s, when Intel started killing off all of the custom microprocessors in the server market.

A simplistic and silly view is that, as a result of this disruption, everyone is going to go and build white-box switches and routers all of a sudden. For some customers who have the depth of skill and expertise to build their own networking gear, this will happen, but for the same reasons I am bullish on storage, I am bearish on this happening broadly. The majority of the market will continue to buy gear from networking companies.

The disruption that Intel created in the 1990s was that companies like SGI, which built everything soup to nuts, suddenly had to go from being hardware companies that had software to being systems companies.

Customers used to buy SGI hardware to get access to the massive number of processors or the high-end graphics engines, and they didn't really care much about the software running on them. The selling point was the hardware.

As the number of vendors that could deliver the hardware SGI produced grew, customers went from buying whatever hardware SGI produced, running whatever software SGI put on it, to thinking about the whole system, and that created more options for customers.

In effect, the decision criteria went from what SPECint or SPECfp number a MIPS processor had to what your TPC-C and TPC-D numbers were and whether you were a key partner of Oracle.

So what is a systems company then?

A systems company delivers a device that is balanced between hardware, software, and packaging. The balance of the three components creates a unique selling proposition that the customer and the market are willing to pay a premium for.

Systems companies emerge in markets where the hardware is supplied by a small number of vendors, and differentiation comes from how you combine the hardware, use it in software, and create a package that the customer can buy.

The challenge for systems companies is that sophisticated customers can build the same product from the same components and this creates a pressure to innovate in lots of areas outside of the core components.

The challenge for a hardware company as it makes the shift to a systems company is that the mindset of how you build a product has to change. Whereas in the hardware-centric world the hardware is built and the software guys have to figure out how to make it work, in a systems world there is a fine balancing act between the two. And whereas in a pure hardware world the hardware performance was the be-all of your differentiation, a systems company has a combination of attributes in software, packaging, and hardware that creates the unique differentiation.

In fact, perversely, how you build software and the easiest way to build software becomes more important over time than any specific hardware platform. And the choice of components becomes more important than any specific component.

The challenge for networking companies is that system design is not where their core focus has been over the last 20 years.

This change in how you build networking devices – from "here's an ASIC with software" to "here's a complete, balanced, and well-integrated system" – is going to be disruptive to how companies do business.

Now that I have said all of that, let me observe that I don't think my employer, Juniper, or Cisco is in danger of getting disrupted per se. I just think that the way systems will get built is going to be very different, and that change is going to be very interesting.

And that, in my mind, is where the real disruption in networking is happening; everything else may just be noise.


Filed Under: innovation Tagged With: Disruption, Networking

The bullish case for storage vendors in a world of free cloud storage?

November 15, 2014 by kostadis roussos

The arrival of free consumer storage was predicted a while ago. As more market entrants arrived and the complexity of delivering online storage declined, the value of just storing bits and bytes would naturally go to zero.

The question was: what would that mean for the market as a whole?

Aaron Levie, probably vastly ahead of the curve, figured this out and has repeatedly said that he’s moving up the stack in terms of capabilities.

The “race to zero” in cloud storage: “We see ourselves really in a ‘race to the top.’ And what we mean by that is, we are in a race to constantly add more and more value on top of storage, on top of computing, to deliver more capabilities, more kind of industry-unique experiences, greater depth of our platform functionality to the industries that we’re going after….If people confuse us with being in the storage business or whatever, that’s a cost of just the fact that there are just so many companies out there, there’s so much going on on the internet. ”

Read more: http://www.businessinsider.com/box-ceo-aaron-levie-qa-2014-11#ixzz3J5i3xZLg

 

The recent partnership between Dropbox and MSFT is probably an example of how early entrants' most valuable current asset is the large number of customers they already have. This "race to zero" is not just a problem for Storage-as-a-Service vendors; zero-cost storage is a problem more broadly for all Software-as-a-Service vendors. Customers will expect to store arbitrary amounts of data within a SaaS and not pay for the storage. This will include backups, archives, etc. of their data.

The challenge facing everyone in the SaaS market, though, is that although storage is free to the end user, there is a non-trivial cost to buying and managing storage, and the cost of the management, support, and maintenance of that storage will increase over time as more customers use it. And worse, the importance of the storage will increase over time as more customers depend on the data existing and being accessible.

The paradox facing SaaS vendors is that storage is free, but data is priceless, and to store data you need storage! And so they will have to invest in their storage infrastructure; the question is how.

In the first phase of SaaS, vendors could legitimately build their own storage infrastructures. The ability to monetize the storage as storage made it possible for the SaaS vendors to hire and invest and innovate in storage infrastructure. And innovation was necessary because traditional storage infrastructure simply did not meet the requirements that the new SaaS customers had either in scale or in deployment type or in cost.

What this means is that if you are a SaaS vendor and building your own remains the only model, you will have to become, over time, an increasingly great storage company to meet your expanding business needs, even though that is not where you should be adding value. This will require hiring a slew of technology experts to solve problems that are fundamentally outside of your core business.

And given that there are more SaaS companies than storage experts, this will both drive up salaries for storage experts AND … also create a market for storage infrastructure services.

Why will it create a market? Because hiring will be hard, companies will look for products that meet their needs, the dollars will be big, and entrepreneurs will try to satisfy that need.

And what will eventually happen is that SaaS companies will just buy storage infrastructure from infrastructure vendors, who will be able to make bigger and deeper investments than any single SaaS player. A single storage vendor can leverage its storage investments across more customers and can make bigger investments than any single SaaS player. And SaaS players will naturally gravitate to those vendors that can provide high-quality storage solutions. My theory is that, sooner rather than later, SaaS companies will focus on leveraging technology built by storage infrastructure vendors instead of rolling their own.

Why will they buy instead of build?

Storage vendors are better suited to deliver low-cost, reliable storage because they have a deeper understanding of supply chains and a deeper bench of people working on the problem of making storage efficient than any single SaaS provider does. In effect, a small number of storage vendors supplying a large number of SaaS vendors will provide more cost-efficient storage than every SaaS vendor building storage infrastructure on its own.
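A back-of-the-envelope sketch of that amortization argument (every dollar figure and customer count below is invented purely for illustration):

```typescript
// Toy model: a storage vendor spreads one large engineering investment across
// many SaaS customers; a build-it-yourself SaaS vendor carries a smaller but
// unshared investment alone. All numbers are made up for illustration.
const vendorEngineeringSpendPerYear = 50_000_000; // shared across customers
const vendorCustomerCount = 200;
const vendorMarkup = 1.3; // the vendor still needs to make money

const yearlyCostIfYouBuy =
  (vendorEngineeringSpendPerYear / vendorCustomerCount) * vendorMarkup;
const yearlyCostIfYouBuild = 10_000_000; // even a modest in-house storage team

console.log(`buy:   ~$${Math.round(yearlyCostIfYouBuy).toLocaleString()}`); // ~$325,000
console.log(`build: ~$${yearlyCostIfYouBuild.toLocaleString()}`);           // $10,000,000
```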

But it’s all about the people.

Look, even if you don't buy anything else I said, let me leave you with this thought: hiring a bunch of guys who know storage is hard. Retaining those guys is even harder. It's much easier to buy a product from someone whose job it is to hire and retain a storage team, especially if the storage is a product you use – not something you directly monetize. A product you buy is a cost that is easier to control – you can spend more faster and cut spending faster with no real impact on the quality of the product over the long term. A product you build is a team you have to hire and lay off when you want to manage costs, and that will impact the quality of the product over the long term.

This is huge

For the last seven years of my career, I have been rather bearish on the storage infrastructure business. Ironically, because storage is not monetizable but very valuable, because doing it cheaply and efficiently and reliably is very important to any SaaS vendor, because it's impossible for every SaaS vendor to do it properly, and because the number of SaaS vendors is going to explode, I believe for the first time in ages that storage sold by infrastructure vendors to SaaS players is going to grow very rapidly …

So I am bullish on storage.

And I suppose as someone who once declared that enterprise storage is dead, that is a surprising conclusion.


Filed Under: innovation, Software

Some observations on DevOps?

November 3, 2014 by kostadis roussos

At Zynga, I had the extraordinary good fortune to lead an extraordinary team that later on in life we might have called DevOps.

We lacked the vocabulary that exists today, and we certainly lacked the insight that many people have since developed.

Instead of trying to explain what Zynga did, I thought it might be interesting to discuss what drove us to create a function that a lot of people call DevOps and we called systems engineering.

First, some observations.

Any web scale application has two parts:

  1. Application code
  2. Platform code

And the whole discussion of DevOps hinges on the definition of Platform.

In a traditional Web App, circa let’s say 2003, the platform was a LAMP stack. Deploying the platform was easy, debugging the platform was easy.

In 2008, as web apps routinely started to scale to millions of DAU and exceeded the performance of a single node, platform innovation became de rigueur in the industry.

We had to innovate at every layer, from how the database is built (resulting in a proliferation of NoSQL and SQL databases), to messaging, to languages, to monitoring, and to development tools.

Layered on top of this innovation was the emergence of programmable infrastructures. In this day and age of AWS, the way the world was in 2004 may feel very alien. You didn't have large pools of servers connected by very large, flexible networks that you could just transform with software packages into whatever kind of database, storage, or compute node you wanted. You had very specific boxes that had to be cabled and racked and stacked. There was very little dynamic, flexible infrastructure outside of perhaps the Googleplex.

Because the platform was under such a radical burst of change, and because of the programmable nature of the infrastructure itself, the basic relationship between developers and operations changed. Whereas before, operations created a fairly boilerplate platform infrastructure that developers programmed to, now developers were creating a platform that application developers in turn programmed to.

At Zynga the platform and the application were often delivered at the same time. As we needed new features, we delivered new platform capabilities to our game developers.

In this environment, the platform developers could not very well just hand off the operations of new platform components to an operations team that had no clue how to manage the new platform pieces.

And as a result, developers had to assume more responsibility for operations than they had in the past. Because the developers tended to have more sophisticated programming skills than the operations teams, operations tended to involve significantly more automation and programming than had existed before. And as a result, a new kind of tooling began to emerge.
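For a flavor of what "operations as code" looked like (a deliberately simplified sketch; checkHealth and restartService are hypothetical stand-ins, not any real Zynga tooling): instead of a human working through a runbook, a platform developer writes and owns the remediation loop.

```typescript
// Sketch of operations-as-code: the remediation a human used to perform by
// hand is expressed as a loop a platform developer writes and owns.
// checkHealth() and restartService() are hypothetical placeholders for
// whatever the programmable infrastructure actually exposes.
interface Host {
  name: string;
}

async function checkHealth(host: Host): Promise<boolean> {
  // In a real system this would hit the service's health endpoint.
  return Math.random() > 0.1; // placeholder: ~90% of checks pass
}

async function restartService(host: Host): Promise<void> {
  console.log(`restarting service on ${host.name}`);
}

async function remediateFleet(fleet: Host[]): Promise<void> {
  for (const host of fleet) {
    const healthy = await checkHealth(host);
    if (!healthy) {
      await restartService(host); // the "ops" step, performed by code
    }
  }
}

remediateFleet([{ name: "web-001" }, { name: "web-002" }, { name: "web-003" }]);
```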

Systems Engineering – at Zynga – was the peculiar merging of platform operations and platform development, or what we might now call DevOps. The core goal of the systems engineering group was to minimize the impact of the changing platform on application developers.

What I observed is that as platform components became standardized, the need for those components to be managed by a pure DevOps team declined, and they transitioned to a more traditional operations team. And in fact, most DevOps teams worked really hard to get out of the DevOps business by enabling operations teams to manage their infrastructure.

We did blur the lines between Operations and Development, but that was out of necessity, not because it was efficient. In fact, we discovered that having highly trained and expensive engineers doing operations was a waste of time and money. Similarly, having weak programmers writing complex software systems rarely got the result you wanted. The specialization and the separation of roles was a good thing, and the blurring of lines was a necessity, not a virtue.

For me, DevOps is part of a complete engineering organization at scale: application developers, platform developers, DevOps, and operations are all required to win.

 


Filed Under: innovation

Physics and Computer Games and Big Data

October 30, 2014 by kostadis roussos


Over the last 15 years, there have been two useful heuristics for figuring out where computing is going.

When I want to look at how applications are going to be built, I look at games. After all, games are at the forefront of creating new kinds of digital experiences, and the need to push the boundaries of how we entertain ourselves is crucial to creating new revenue and sales opportunities.

When I want to look at how infrastructure is going to change, I look at what people want to do in the supercomputer space.

Two nights ago, I had the marvelous opportunity to hear a talk that was a discussion of Physics and Big Data. As a software infrastructure guy – at the end of the day I like to think about how to build systems that enable applications – I have been wondering whether Big Data was going through a bigger-faster-stronger phase or whether there were new intrinsic problems.

And the answer is yes to both.

Clearly we need systems that can do more analysis faster, store more data at cheaper costs, etc.

What was not as obvious was that the exponential increase in transistors, coupled with the disruptive trend of 3D printing, was going to enable:

  1. A proliferation of very sensitive distributed sensors that need to be calibrated and whose data needs to be collected.
  2. The ability to find even weaker signals in the data.

In effect, we are going to be able to collect more data faster, and because of that we will be able to find things that we could not find before. And solving 1 and 2 on their own are very interesting problems that can keep me busy for the next 10 years …
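A toy illustration of point 2 (nothing physics-specific is assumed here; this is just the standard statistical fact that averaging N independent noisy readings shrinks the noise by roughly the square root of N, so more sensors let you resolve weaker signals):

```typescript
// More sensors => weaker detectable signals: averaging N independent readings
// reduces the noise on the mean by ~sqrt(N).
function gaussianNoise(): number {
  // Box-Muller transform for a standard normal sample.
  const u = 1 - Math.random(); // avoid log(0)
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

const trueSignal = 0.05;   // well below the per-sensor noise (sigma = 1)
const sensorCount = 10_000;

let sum = 0;
for (let i = 0; i < sensorCount; i++) {
  sum += trueSignal + gaussianNoise(); // each sensor: signal + unit noise
}

// Noise on the mean is ~1/sqrt(10000) = 0.01, so the 0.05 signal stands out.
const estimate = sum / sensorCount;
console.log(`estimated signal: ${estimate.toFixed(3)} (true value ${trueSignal})`);
```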

However, there are some new problems that come out of that:


We will need to be able to find new ways to explore data and track our exploration through the data.


We will need ways to combine the datasets we create. After all, as more and more sensors get created and sensors get cheaper, the ability to combine datasets will become crucial. And as the scale of the datasets grows, an ETL becomes less realistic.

And then, to make this all more interesting, there is some thought that the way we collect data may itself create signals and that meta-analysis of the data will be required. How you do that is an interesting problem in itself. And how do you create systems that can correct for that…

My head has sufficiently exploded. Turns out that just making things go faster isn’t the only problem worth solving…

 

 

 


Filed Under: innovation Tagged With: Big Data, Computer Games, Physics, Super Computers

Hardware not Software is Eating the World

October 30, 2014 by kostadis roussos

Andreessen made this amazingly brilliant comment that "software is eating the world," and it's been ringing in my head for two years.

The idea is that software is doing more and more things in more and more places, changing and restructuring the world as we know it.

And I agree.

My quibble is that software is actually the side effect of the real world eater, and that is hardware. The relentless increase in the number of transistors at exponentially decreasing cost is truly the world eater.

The fact that we have these machines being added at this relentless pace is what is making it possible for software to do its thing.

If we consider a transistor as a machine, we are seeing an exponential increase in machines being deployed every year. And that exponential increase is making it possible for software to go deeper and deeper into our lives. And that exponential increase is making it possible for software to do more.

We have gotten so used to that increase that it almost seems natural to be drowning in transistors.

What is making this mobile cloud transformation possible is the explosion in the number of transistors per person.

Freemium became possible because the cost of compute per person became zero.

What guarantees another equivalently large transformation in 10 years is that the current sea of transistors will be only 3-10% of the total transistors deployed in 10 years. What appears to be an infinite computational infrastructure today will look puny and irrelevant in 10 years.
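The arithmetic behind that is just compounding. A quick sketch (the 40% annual growth rate in transistors shipped, and the window of prior years, are my assumptions for illustration, not measured figures):

```typescript
// If the number of transistors shipped per year keeps growing ~40% annually,
// how big a slice of the 10-years-from-now installed base exists today?
// The growth rate and the "5 prior years" window are assumptions.
const growth = 1.4;   // assumed year-over-year growth in annual shipments
const horizon = 10;   // years into the future

// Everything deployed so far: today's shipments plus a few prior years.
let deployedToday = 0;
for (let y = -5; y <= 0; y++) deployedToday += Math.pow(growth, y);

// Everything that will exist at the horizon: today's base plus future years.
let deployedAtHorizon = deployedToday;
for (let y = 1; y <= horizon; y++) deployedAtHorizon += Math.pow(growth, y);

const share = (100 * deployedToday) / deployedAtHorizon;
console.log(`today's transistors as a share of the 10-year base: ${share.toFixed(1)}%`);
// With these assumptions the answer comes out around 3%, in line with the
// 3-10% range above.
```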

This relentless increase in transistors is what makes this industry so much fun.

And although I am a software guy, it is the hardware that is eating the world…


Filed Under: innovation

Is this really about Google Docs?

October 27, 2014 by kostadis roussos

With respect to MSFT's announcement today: Google and Box have been offering free unlimited storage for a while now. The difference is that Google and Box required you to spend more than about $40 a month for a business plan.

My theory is that this latest announcement is really about Google.

I am a happy user of Google products. Even though Google Docs are less capable than their MSFT counterparts, their freeness makes up for that.

Now that my storage requirements are starting to push me to pay for a larger storage plan, I am shopping for an alternative.

And MSFT's new pricing for storage is going to make me look at MSFT for the first time in ages.

My theory is that MSFT is figuring that there are enough people like me who, when confronted with the choice, may buy the MSFT plan because it's cheaper. This may help slow down the growth of Google Docs.

Dropbox may be collateral damage.


Filed Under: innovation, Storage
