
End of vSphere Standard: A Personal Tribute

October 30, 2025 by kostadis roussos

Photo by Tomasz Frankowski on Unsplash

The end of vSphere Standard on July 31, 2025, somehow passed me by.

Which is sad, because its existence was such a massive part of my life.

When I became the architect of vCenter, I saw it as an opportunity to make an impact on the world. Customers trusted VMware. And the world trusted VMware. So much of the world depended on vCenter and vSphere that I described the job as the most important job in IT.

I felt like I had been handed the keys to the kingdom and told, “Go figure it out.”


And, by God, we did.

And the proof is that, 10 years later, so many customers are mourning the end of vSphere Standard.

What was vSphere Standard? It wasn’t a bag of bits. Anyone can build a bag of bits. It was the commitment of the finest engineering team on the planet to take care of customers at all costs. At a cost even to VMware’s business.

VMware was relevant because of those 300,000 customers. It was those customers who made us irreplaceable. It was those customers who made us significant. It was those customers who allowed us to shape the direction of IT. Because we had that reach, we mattered.

Or at least I thought we did.

The number of things that we did to guarantee that vSphere Standard customers had a great experience at the expense of other customers was large.

I saw that trust as an obligation.

Folks would walk into my office and say, “Do X.” And I remember thinking, out loud and silently, that keeping our customers happy and never giving them a reason to leave was my first and only job.

And it wasn’t just me, but the entire organization. It was devoted to that customer base.

I feel a sense of loss to see the end of that relationship with those customers.

The customers aren’t small. They are real big businesses. They are businesses that relied on my team to do right by them. The idea that they are small is unfair to those companies that trusted us.

They are the people who hugged us when we delivered the Supervisor, because we ensured they could keep their jobs and keep supporting their families.

When I see what happened, I feel a level of regret that maybe I shouldn’t have done what I did. Perhaps it was a mistake.

It wasn’t. It just shows you that change is inevitable.

And then I choose to take it differently.

The outpouring of frustration from the customer base means that my team did right by you.

As for my mission, to deliver stellar value and to have you trust my team, I can declare: Mission Accomplished.

And I wish it wasn’t in your frustration that I found out.

Thank you for being great customers and trusting us for all those years.

And I do work at Nutanix 🙂 If you loved my work at VMware, you might find Nutanix worth checking out.


Filed Under: Software

the architecturalist 60: The day LeetCode coding interviews died.

July 19, 2025 by kostadis roussos

I have written about the difference between generated software and created software, as well as how to interview a senior software engineer, in 57 architecturalist papers.

So when Samuel Bashears wrote that a human had barely beaten a coding agent at a LeetCode-style coding competition, I was thrilled.

This was the best news I had heard in ages.

It was about a tweet by Sam Altman.
When I first joined SGI in 1997, I took a class on how to interview. The presenter concluded that the best predictor of future performance is past performance.

The tech industry took a different direction, focusing on puzzles.

And so the LeetCode test began. In 2013, while searching for a job, I failed to secure a position because I was unable to complete the LeetCode coding questions.

Somehow, that didn’t stop me from becoming an architect of the most widely deployed management software company and then turning that product around, so that the company was purchased for five times what it was worth when I joined.

And even while I was doing that, a co-worker was so annoyed at my perceived lack of coding skills that they anonymously trashed me on my blog and TheLayoff.com.

The notion that these questions are valid predictors of anything useful is an absurdity.

Worse, they cultivate a belief system that being able to do them is the essence of great software engineering.

It’s not.

Excellent software engineering involves understanding customer requirements, the limits of the software system, and how to engineer a solution that fits within the budget while gaining leverage for the next set of features.

Does that mean coding questions are off the table? No. However, there is a vast difference between building an optimal hash table and the kind of work that involves learning a large code base, figuring out customer requirements, and thinking through the possible places to improve the code to address those requirements.
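For concreteness, this is the kind of self-contained puzzle in question: the two-sum problem, a staple of such interviews, sketched here in Python (my example, not one cited in the post):

```python
def two_sum(nums, target):
    """Classic interview puzzle: return the indices of the two numbers
    that sum to target, in one pass using a hash table."""
    seen = {}  # value -> index of where we saw it
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return None

print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
```

Solving this cleanly in 20 minutes tells you very little about whether a candidate can navigate a million-line code base or translate a customer requirement into a design, which is exactly the gap the post describes.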

Please find a way to incorporate that part of the interview process, but make it relevant to your work.

And if building optimal hash tables is what you do, then by all means, ask that question.

May this LeetCode mania go the way of the dodo.



Filed Under: Architecturalist Papers, Software

59 architecturalist papers: a new age for software architects

June 20, 2025 by kostadis roussos

Agentic coding is poised to reshape software engineering and employment radically.

When I go on a stay-cation, I use some of the time to learn about a new technology.

Nine years ago, I wanted to learn Go, so I started programming and wrote a web server.

The code sat in a git repo.

And here’s an interesting experiment. I haven’t written Go in 9 years. I wanted to upgrade this web server to speak OpenAPI and use a SQLite database.

It would take me at least a month to finish the job.

What if I took this code base and asked Jules (because I had credits) to do the upgrades?

And yes, it was done in just one day. Because I was on vacation, I managed to accomplish this while playing Civ 7, writing a book, going for a workout, spending time with my son, and having lunch with my family.

A significant portion of software development involves extracting data from a database and creating a REST API for that database.
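For concreteness, here is a minimal sketch of that bread-and-butter pattern, written in Python using only the standard library (the post's actual experiment used Go and Jules; the `users` table and its fields here are hypothetical, for illustration only):

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical schema and sample data, for illustration only.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("linus",)])

def users_as_json():
    """Extract rows from the database and render them as a JSON payload."""
    rows = conn.execute("SELECT id, name FROM users").fetchall()
    return json.dumps([{"id": r[0], "name": r[1]} for r in rows])

class UsersHandler(BaseHTTPRequestHandler):
    """Expose the table as a read-only REST endpoint."""
    def do_GET(self):
        if self.path == "/users":
            body = users_as_json().encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To serve: HTTPServer(("", 8080), UsersHandler).serve_forever()
print(users_as_json())
```

Multiply this shape by hundreds of tables and endpoints, plus the periodic migrations to whatever framework comes next, and you have the work the post says agentic tools now absorb.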

Another significant portion of software development involves refactoring APIs to utilize newer technologies.

That work consumed a significant amount of time and energy from many people.

Now it doesn’t.

What does that mean?

Over the last 20+ years, we have seen too much focus on the ability to solve LeetCode programming problems, rather than on how to reason about large software systems.

Why?

The bottleneck in developing software systems was hiring enough people to produce REST APIs and refactor the code to use the next technology.

The productivity of software engineers constrains a software architect, and the ratio of software architects to coders was about 1:50. With this new technology, the ratio will change to 1:10.

Do I believe that there will be fewer software engineers who only write code? Yes. Will software employ fewer people? No. The productivity gain of being able to build more software systems will increase the pie. But the pie will look different.

If you want a laugh, here’s the repo: https://github.com/kostadis/restserver

I make no claims about the correctness of what Jules did. My goal was to understand how it could address this use case.

Having a software engineer review the code and ensure it functions as intended is crucial; I was only trying to see what Jules could do with a reasonably typical enterprise software use case.


Filed Under: Architecturalist Papers, Software

Open Facebook API or what to do about Facebook

December 28, 2019 by kostadis roussos

When I left Zynga in 2013, I was convinced that Facebook was a malevolent entity run by leaders who could not be trusted. But I was also bitter about a $6 stock price and my life choices.

Fast-forward to 2019, and it turns out that what I thought was just sour grapes undersold the net harm Facebook has created.

One option that isn’t considered very seriously is the following simple proposal: don’t break up Facebook, but regulate access to and control of the friend graph, and the ability to use the friend graph to publish information.

In 2012, when Facebook and Zynga stood off, the disagreement at its heart was about ownership of the friend graph. Facebook believed they owned the friend graph and, by extension, owned how it could be used. We disagreed. In the end, we caved. I know this because I worked on the software systems necessary to create a parallel friend graph of people who were friends with other people who played Zynga games.

Facebook would love for us to spend time talking about breaking things up, instead of talking about the one thing that matters: a regulated open API and regulated data portability.

Consider the messenger space. Because the friend graph is in my personal address book, it’s trivial to talk to several dozen different friends. Because the content is on my phone, typically pictures or documents, I can share anything with anyone.

Consider how many more messenger apps there are, versus how many social networks there are.

But let’s look to the past. During the failed MSFT anti-trust trial, a peculiar part of the agreement said that MSFT could no longer have private APIs, and that they had to communicate changes in a very specific public way.

This ruling enabled NetApp, which had built a reverse-engineered CIFS server, to survive and thrive. Because MSFT was losing the CIFS business, it also pushed MSFT to look for alternatives to CIFS, like SharePoint for document sharing and collaboration.

But over the long term, it enabled companies like Box and Google Drive and other file-sharing companies to emerge. With the guarantee that a single man couldn’t break an API, a healthy and vibrant data storage ecosystem developed.

If we had an open social graph, an open API, and data portability, then I suspect that over time new social networks would emerge. Every social network would probably cater to different kinds of people.

In many ways, Facebook does this today with Facebook Groups. For example, I happen to have joined two Facebook groups, one dedicated to old-school RPGs and another to 5E. The two groups hate each other. But because my social graph is portable within Facebook, I can communicate with both groups.

Or we can even go back to Facebook’s origins. When Mr. Zuckerberg opened up the API, he promised it was going to be open and portable. He lied, of course, but not before Mark Pincus and Zynga figured out how to exploit the graph to grow Facebook’s business. Once Mr. Zuckerberg figured out that owning the graph, and how you communicate with it, was very valuable, he squashed us like a bug. And destroyed the Facebook app ecosystem.

Which brings me to regulation: we can’t trust Mr. Zuckerberg, just like we couldn’t trust Mr. Gates. And breakups don’t always work. Look at AT&T: 40 years after the breakup, they control everything, again.


Filed Under: Facebook, Net Neutrality, Software, Storage, Zynga

Is the great cloud shell game over?

March 3, 2019 by kostadis roussos

With the success of VMC on AWS, it’s time for us to admit that the Cloud Native programming model as the dominant and only programming model for the cloud is dead.

The Cloud Native programming model was a godsend for IT and software engineers. The move from CAPEX to OPEX forced all of the pre-existing software that ran on premises to be completely rewritten for the cloud.

Jobs, careers, and consulting firms exploded as everybody tried to go on this cloud journey.

It was like the great Y2K rewrite, which was followed by the great C++ rewrite, which was in turn followed by the great Java rewrite…

This rewrite was forced because on-prem software assumed infrastructure that did not exist in the cloud, and so could not work there.

On-premises, you have very reliable and robust networks, storage that offers five-nines reliability, and a virtualization infrastructure that provides automatic restartability of virtual machines.

Furthermore, on-prem you had the opportunity to right-size your VM to your workload instead of playing whack-a-mole with the bin-packing strategy known as “which cloud instance do I buy today?”

The problem with on-prem was that you had to buy the hardware, and worse you had to learn how to operate the hardware.

The cloud operating environment, where networks are unreliable, storage is unreliable, and applications must be HA-aware and integrate with HA-aware systems, required a full rewrite of perfectly fine working software.

What motivated this mass migration and mass rewrite?

The motivation was that new software could legitimately be written faster on a platform like EC2. Furthermore, companies didn’t want to own the hardware and wanted to rent their infrastructure.

These two factors pushed companies to look at Cloud Native not as a way to augment their existing software assets, but as a once-in-a-lifetime opportunity to rewrite them.

But it turns out that is hard. And it also turns out that the pre-existing on-premises operating model is kind of valuable. Instead of every application having to figure out how to deal with flaky infrastructure, it’s just simpler to have more robust infrastructure.

And now that the cost and agility advantage of the cloud has been in part neutralized, what I hope we might see is a collective pause in the software industry as we ask ourselves not whether it’s on-prem or cloud-native, but what the appropriate programming model is for the task at hand.


Filed Under: Hardware, innovation, Software

The brakes have brains

February 13, 2017 by kostadis roussos

Fascinating article about Bosch (https://www.technologyreview.com/s/601502/boschs-survival-plan/).

A couple of things that popped out:

  1. Factories are turning into computers. Interconnecting machines, originally a human task, is now a machine task. In 20 years, a human on the shop floor may be as ridiculous as a human swapping out transistors in an x86 processor.
  2. Data-driven optimization is getting faster. A core fallacy of data-driven product design is the belief that it can drive new products; what analytics can do is make existing products more efficient. Pre-existing wireless networks will let devices communicate with home base very efficiently, and coupled with factory floors that can be optimized faster, this has tremendous implications for product life cycles.
  3. Humans who rely on brawn or physical stamina are losing value fast.
  4. There is an interesting singularity ahead, when 3D printing, data-driven design, and fully automated factories intersect in a meaningful way across the entire manufacturing pipeline. Factories will be able to retool instantaneously to meet instantaneous demand and insight.

The world of yesterday is going away so fast that the only question is whether we will survive long enough to get there.


Filed Under: Hardware, Software

The archer and the gun, and the misunderstood impact of automation

December 29, 2016 by kostadis roussos

Last night I went to a great burger place in Arnold called The Giant Burger. And as I sat there waiting for my burger to arrive, I had a chance to reflect on the impact of automation.

The Giant Burger is not a fast place. It’s a place for great food, not a place for getting great food fast. The reason is that one of the employees carefully assembles each burger to order. And because orders are big, and because she is one person, orders come out at a rate of about six per hour.

And as I was staring at her and thinking about machines, I was wondering what do machines do?

What machines do isn’t replace human beings. What they do is make less skilled workers more skilled.

Consider the archer in the Middle Ages. Being an archer required a lot of skill and practice. You had to train from a young age and continuously hone your craft. In some sense, you could argue that archers were the artisans of war.

And then the gun showed up. It wasn’t more reliable or more efficient than the longbow, but you could find 50 people, hand them 50 muskets, and do almost as much damage as trained archers.

In short, the gun made large armies of archers possible by reducing the skill requirement.

And that happens over and over and over again.

Look at the modern military drone. I can’t fly an F16 because I am too old, too tall and too fat. I could fly a drone. And there are more middle-aged fat guys than there are highly trained fighter pilots.

And so what happened?

We have drones all over the world killing random terrorists because we can have armies of fat people sitting in rooms flying a robot.

We have more people killed from the air than at any time since the Gulf War, and not a single pilot has made a kill.

Or look at the da Vinci system for surgery. Until now, surgery was about skill; surgeons were more athletes than scientists. With da Vinci, the skill necessary to do surgery will decline over time.

What automation does, what machines do, is reduce the value of specialized skills and democratize those skills. And in the process, the value of human labor declines, because the number of people who can do the task increases, thereby reducing salaries.

And now software is making it worse. In the past, upgrades required new physical systems; now, with software, we can upgrade existing systems in place. And because of how electronics work, we can improve the intelligence of systems at a rate of 2x every 18 months.

And where it gets interesting is that in the past, before software, mechanical systems had to be carefully engineered. A mechanical lever, for example, has far less tolerance for error than a computerized control system that can make micro-adjustments very quickly.

In short, we can innovate faster and cheaper than ever before in creating machines that let anybody do anything.

Automation isn’t about replacing people; it’s about eliminating the need for skill. And with that, we remove the value of training, and with that, we replace the highly trained archer with conscripts.

Which begets the obvious question:

So what?

Given that the value of skill is declining faster and faster, the value of most human labor is decreasing, and therefore the per-unit price employers will pay for a job falls below what people would accept.

And so when we say that automation is killing jobs, what we are saying is that automation is causing the price we are willing to pay for humans to do jobs to decrease.

And then we get to the policy prescriptions.

1. Some kind of universal income

One approach is to recognize that there is a net surplus labor force at the current labor price, a price kept artificially high by the minimum wage, Medicare, food stamps, and so on; to recognize that that group of people is going to have to die off or leave the country for the surplus to be eliminated; and in the meantime to continue to extend those benefits, including something like a universal income.

The problem is that that group of unemployable people is going to expand over time.

And the other problem is that there will be an increasingly shrinking set of people who will subsidize the lives of those whose skills have no value at the current price.

2. Make human labor competitive by retraining

This approach recognizes that it takes some time to build computer systems that can replace all skills and that the computer systems themselves need human operators. And so we continuously retrain people.

The challenge is that during retraining people are not employed, and post-retraining the value of their labor is low. And so humans continue to experience points in time where they make less money and don’t have access to a stable income.

This also has the problem that the cost of the training has to be covered. And the folks who are making money will resent that their money is helping other people.

3. Make human labor competitive by lowering its price, and over time increase the price by reducing the number of people in the labor supply.

Another policy prescription is to cut those benefits such that the surplus labor becomes competitive with machines at a much lower price point, and then rely on other policies to cause the labor pool to shrink over time.

For example, a starving man will work for less than $7.25.

Cut his medical coverage, and a sick person will die off quickly.

Cut off his social security, and when he is too old to work, he will die of hunger and illness.

Restrict immigration and the number of people who enter the country will decrease over time.

The net effect will be that surplus labor will decline over time. In the short term there will be some pain, but in the long run, this will work out.

In the press, there is some discussion of the heartlessness of the tech industry because we create the machines that displace skill.

Tech is amoral. Our policy prescriptions are moral. If you are outraged by the outcomes of an amoral device, go ask yourself which policy prescriptions you favor.


Filed Under: innovation, Jobs, Software

Let’s start using Technical Leverage instead of Technical Debt

June 29, 2016 by kostadis roussos

Over the last year, I’ve been struggling with the term technical debt.

The theory behind technical debt is that there are choices we make today that cost money later. And that’s motherhood and apple pie.

The problem with that phrasing is the implicit assumption that technical debt is a bad thing, because all debt is bad.

And that is just a profoundly wrong conclusion.

Debt is how you get leverage in business, and it’s how you get leverage in time in engineering. And engineering is a tradeoff between time and resources.

More generally, because of the negative connotation of debt, the theory of technical debt says that:

Engineering tradeoffs aligned with business priorities are bad if they hurt the architecture.

And that is the wrong answer. Because if the business priorities result in growth and success, then this was the right tradeoff between time and technical correctness.

Engineers can use leverage to go faster, and like businessmen, we can overdo it. And when we do, well, there are consequences.


Filed Under: Software

How to build a product

February 14, 2016 by kostadis roussos

There are two fundamental approaches to building products: a technology-first approach and a customer-first approach.

The technology-first approach examines what is possible and, based on that, builds something.

The customer-first approach figures out what customers need and then builds something to satisfy that need.

I have had the opportunity to pursue both approaches in my career. And what I have observed is that both can produce poor results.

The technology-first approach can produce something that no one wants.

The customer-first approach can produce a few deals that never grow past a certain point.

At one of my jobs, the GM had the head of product management and me at each other’s throats. The head of product was very customer-centric. I was very technology-focused. And the GM would only approve a new project if we both agreed. Sometimes the head of product would wear me down, and I would grumpily agree with his ideas. Sometimes I would wear the head of product down, and he would grumpily go along with mine.

And those ideas had marginal success.

The best products, the ones that were huge successes, were the ones where we both could see how the product would satisfy a customer need and that there was real technology innovation.

 


Filed Under: innovation, Selling, Software

Scaling efficiently instead of scaling up or out.

November 11, 2015 by kostadis roussos

Over the last few months, I’ve been involved in a lot of discussions about how to make software systems more efficient.

When we look at making software go faster, there are three basic approaches:

  1. Pick a better algorithm.
  2. Rearchitect the software to take advantage of hardware.
  3. Write more efficient software.
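To illustrate how much the first lever can matter, here is a toy Python comparison of my own (not from the post): detecting duplicates by comparing every pair is O(n^2), while a single pass with a hash set is O(n), with no change to hardware or code quality, just a better algorithm.

```python
def has_duplicates_quadratic(items):
    """O(n^2): compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """O(n): one pass, remembering what we've seen in a hash set."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

data = list(range(10_000)) + [42]  # one duplicate at the end
# Both agree on the answer; only the amount of work differs.
assert has_duplicates_quadratic(data) == has_duplicates_linear(data) == True
```

On large inputs the gap between the two dwarfs anything the other two levers can deliver, which is why "pick a better algorithm" comes first in the list.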

From about 1974, when Intel introduced the original 8080, until 2004, conventional wisdom was that writing more efficient software was a losing proposition. By the time the more efficient software was written, Intel’s next-generation processor would be released, improving your code’s performance anyway. The time you spent making your software go faster represented a lost opportunity to add features.

As a result, a generation of software engineers was taught that premature optimization is evil.

Textbooks and teachers routinely admonished their students to write correct code, not efficient and correct code.

Starting in 2005, with the shift to multi-core processors, making software go fast was about taking advantage of multiple cores.

Software developers had to adapt their systems to be multi-threaded.
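In miniature, that adaptation looked like this Python sketch of my own: split one big computation into chunks and run them on a pool of workers, then combine the partial results. (In CPython the GIL limits CPU-bound speedup from threads, so treat this as an illustration of the structure, not a benchmark.)

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Work unit: sum one slice of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Split data into roughly one chunk per worker and sum concurrently."""
    n = max(1, len(data) // workers)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves order; combining partial sums gives the total.
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1_000_000))))  # same answer as sum(range(1_000_000))
```

The hard part was never the chunking; it was making the decomposition safe when the work units shared mutable state, which is exactly the discipline multi-core forced on developers.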

At the same time, software developers noticed that the number of cores per system was limited, and to get ever-increasing scale, they had to be able to leverage multiple systems.

And thus the era of scale-out distributed architectures began.

In this era, software engineers had to create new algorithms and new software architectures, and writing efficient code was still not viewed as an important part of delivering ever faster software.

From 1974 to 2015, the name of the game was to use more and more hardware to make your software go faster, without any consideration of how efficient the software was. From 1974 to 2004, you just waited for the next processor. From 2004 to 2015, you re-architected your software to take advantage of more cores and, later, to scale out to more systems.

And by 2012, writing large-scale distributed systems was easy. A combination of established frameworks and patterns made it easy to build a system that scaled to hundreds of machines.

Software engineering had discovered the magic elixir to ever increasing performance. We could harness an increasingly large number of systems combined with multi-threaded code to get infinite performance.

If the 1974-2004 era made writing efficient code seem of dubious value, the scale-out age made it even more questionable, because you could just add more systems to improve performance.

High-level languages, coupled with clever system architectures, let anyone deliver an application at scale with minimal effort.

Was this the end of history?

No.

It turns out that large scale-out systems are expensive. Much like processors hit a power wall, massive data centers that consume huge amounts of energy are expensive. And companies started to wonder: how do I reduce the power bill?

And the answer was to make the code run more efficiently. And we saw things like HipHop and Rust emerge. HipHop compiled PHP into more efficient code; Rust provides a better language for writing efficient code. And in parallel, we saw platforms like Node.js and Go become popular because they allow for more efficient code.

Software efficiency has become valuable again. The third pillar of software performance after a 40-year wait is the belle of the ball.

And what is interesting is that the software systems of the last 40 years are ridiculously inefficient. Software engineers assumed hardware was free. And because software engineers made that assumption, large chunks of software are very inefficient.

The challenge facing our industry is that to improve the efficiency of software we will either have to rewrite the software or figure out how to automatically improve performance without relying on hardware. No white knight is coming to save us.

And we are now looking at a world where performance and scale are not just a function of the algorithms and the architectures, but of the constants. And in this brave new world, writing efficient and correct code will be the name of the game.

We will not only have to scale out and up, we will also have to do so efficiently.

Put differently, perhaps there is no longer such a thing as premature optimization?


Filed Under: Software
