61 Comments

In a 40-hour work week filled with depressing code-related absurdities, I can always hop on here for a quick dose of dopamine and a reminder that things can, and should, be better - and that I'm not crazy for thinking that.


Loved the video.

There is another common excuse, though, that I often hear when people talk about writing performant code: that they would be doing premature optimization, and that you should only care about performance when it becomes a real problem and you actually need to rewrite everything (even in a highly demanding area such as game development).

I wonder what your opinion on this matter is. When is it too early to write a performant solution? Is there really such a thing as premature optimization? If so, is there a better approach that would make our lives easier, so we don't have to rewrite everything every time our solution is not performant enough?


You will be happy to know there is an entire video on the way about that very (also absurd) excuse.

- Casey


Looking forward to it :)


Good on you, because I'm already seeing comments making this claim...


I literally got a comment on some YouTube video with this exact talking point just 4 hours ago.


I'm way less experienced than Casey and some of you, but I've experienced wasted work on 'optimization' many times. I don't agree with the general narrative of "premature optimization", but in my experience there is certainly a cost to optimizing something before you have to. That is, if you specialize your code for a very specific use case before you know its relevance, then when you need to change it, you'll have to do extra work to unwind that optimization to become flexible again - whereas if you had instead written simple code that wasn't pessimized and didn't throw away flexibility, you would have decently performing code and less work when a change is needed.

E.g. if you had written code similar to the final optimized version of the polygon math in "Clean Code, Horrible Performance", but no longer needed the area function and instead needed circumference, it would take some extra work compared to having just started with the tagged union Casey began with in that video. Admittedly it's not the best example, since there's no state handling or other side effects, which is the stuff that is usually harder to unwind. In other words, optimizing cold paths and ephemeral parts of the code prematurely is indeed a waste of time. But I don't sense that's what the term is typically used to describe.
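To make that concrete, here is a sketch of the kind of tagged union I mean (in TypeScript for brevity, rather than the C++ from the video, and with hypothetical names). Adding a new operation like circumference is just another switch over the same data, with nothing to unwind:

```typescript
// A tagged union over shape kinds, in the spirit of the video's version.
type Shape =
    | { kind: "square"; side: number }
    | { kind: "rectangle"; width: number; height: number }
    | { kind: "circle"; radius: number };

function area(s: Shape): number {
    switch (s.kind) {
        case "square": return s.side * s.side;
        case "rectangle": return s.width * s.height;
        case "circle": return Math.PI * s.radius * s.radius;
    }
}

// A later requirement like circumference reuses the same data unchanged:
function circumference(s: Shape): number {
    switch (s.kind) {
        case "square": return 4 * s.side;
        case "rectangle": return 2 * (s.width + s.height);
        case "circle": return 2 * Math.PI * s.radius;
    }
}
```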


This is probably the number one argument against writing code in a performance-aware way: "It's premature optimization", as if putting a bit of thought into performance on a first pass were a cardinal sin.

Don't get me wrong - don't obsess over performance on your first go at a problem - but putting a bit of thought into your data model and some of your data structures won't hurt either. If there's one argument against this, it's probably the one that goes: you should measure before optimizing, because what you think is slow is probably wrong.


I believe the main issue is that languages/frameworks/libraries obfuscate what they are doing in the background, so programmers treat everything like a black box.


I half disagree. The tooling does suck, but I think the human aspect of it is just as big an issue, if not bigger. Here's what I mean.

Having ClippyGPT, god help us, say things like "Hey, I see you're trying to use this list as a set, consider switching so it goes faster?" or "Hey, I see you're trying to do an N^3 triple loop, do you really need that?" will certainly help. But you'll still need people to understand what they're doing and to care. Clippy can get bent out of shape about their slow code, but if they don't care, it won't matter. I've spent good portions of my day helping coworkers rewrite their overly complicated code into simpler and faster code; that had nothing to do with tooling and everything to do with the person writing the code.
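To make the list-as-set suggestion concrete, here is a hypothetical sketch (TypeScript here, but the same point applies in Python and most other languages): membership tests against an array scan the whole thing, while a Set is a hash lookup.

```typescript
const ids: number[] = [...Array(100_000).keys()];

// List-as-set: every membership test scans the array, O(n) per check.
function seenSlow(id: number): boolean {
    return ids.includes(id);
}

// The same data in an actual Set: O(1) hash lookup per check.
const idSet = new Set(ids);
function seenFast(id: number): boolean {
    return idSet.has(id);
}
```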

As for the tooling being bad, I agree. For a variety of reasons, Python was the logical choice given the project requirements. But I am now between a rock and a hard place trying to rewrite portions of the code to run in parallel. I know EXACTLY what I would like this program to do, but I just can't contort this snake into that shape.


Completely agree with everything in the video. Personally, I've often seen companies want each service to be responsible for a single thing, with a database for each one. So to fulfill a simple request, I have to go through a call chain of services, where each call goes through the network and results in a lot of database queries. And it just keeps going, adding more and more services for each thing.

Another one: to fix the issue above, they put Kafka or something similar in between to tolerate failures of those services - to the point that a request to do 10 things will be split into 10 Kafka messages (actually, what I see is anywhere from 10 to 100) to be consumed. And the sole reason for doing that is to make the system more "resilient".

I'm not sure if it's only me, but that sounds really bad, since the probability of an error increases a lot when things are made that complex. And then you "need" all those machines to be able to handle the load, which is no more than 1,000 requests per minute.


Not to mention how much harder it is to find the root cause of a failure when dozens of microservices are involved!

- Casey


Every project I ever worked on during my 12-year programming career that had even a single user included a lot of work on performance.

Only the last 3 years were in a performance-oriented domain.


This video was great Casey! Going through all of those public examples was super interesting.

To add my own performance anecdote here: I used to work for Facebook, and there was a huge internal effort to make things more performant. I remember literally multiple efforts to reduce CPU usage by as little as 1% in binaries that ran on the entire fleet. When you multiply that 1% by 5MM servers, the cost savings are pretty crazy.

I even did work to reduce bloat in the code generated by their Thrift compiler. Hundreds of Go binaries went down in size by multiple MBs, and that stuff starts to matter when you're copying executables around for deployment (sometimes many times a day).

Maybe part of the reason developers tend to make these excuses for not caring about performance is this disconnect between their software and how much it costs to run.


I can say from personal experience that performance in web frontends can be absolutely critical for customers - at least at 2 companies I have worked at. At the second company, this happened:

> Webapp is slow and everybody is blaming the frontend framework and talking about rewriting in a new one

> I finally sat down and figured out that the problem was actually a slow memory leak. The JavaScript heap grew by 70MB or more if you left specific features of the app running for more than 2 hours. And for this app, running for 2+ hours was a common use case. Once that happened, GC hits increased and everything went badly in general.

> I fixed the problem. Found many other things I could fix in the course of profiling for those things. It took me months.

> Finally merged my changes. Within the week, a surprised-looking CEO says, "the app has been under load for a while and there are no complaints in the customer-duty channel. Did we really get that much better?" 😂
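(For the curious: leaks like that usually come down to something like the hypothetical TypeScript reduction below - not our actual code - where a feature registers a subscription on every run and never tears it down.)

```typescript
function startFeature(bigBuffer: Uint8Array) {
    // Each time the feature starts, this closure captures bigBuffer...
    const onMessage = () => { console.log(bigBuffer.length); };
    window.addEventListener("message", onMessage);
    // ...and nothing ever calls removeEventListener, so every captured
    // buffer stays reachable. Leave the app running for a couple of hours
    // and the JS heap keeps growing while the GC works harder and harder.
}
```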

This was a turnaround from a total gloom-and-doom situation that had been going on for months, where it looked like the app just would not scale. And I think it was all because of performance improvements. Although I cannot rule out the possibility of the sales team doing something I'm not aware of, I know that literally everyone liked the app much more after I merged those changes.

Another time, at another company, there was a form which had multiple tabs, and each tab had a bunch of ng-tables. This form was so goddamn slow that you would click a checkbox and go get coffee (literally). The product manager in charge of that part wanted it made faster, but nobody thought it could be (it had been that way for a year or more). I said something along the lines of "I'll try to make it faster, but no guarantees". Turns out the thing was just a monstrosity of useless libraries being used to do very simple things that could be done with no libraries at all. I removed them all over a weekend, and the form became just like any other normal web form. They could not believe it. And I gained a lot of reputation :P

These are just two times I can remember when performance was the thing I worked on in web frontends. And both times I had to do it with the understanding that I was doing it for my own sake, while others talked about a "full rewrite". And then, once I had just a little bit of success, everyone relaxed and the "optimization" effort was stopped right there.

I think the issue is that on the web, perf doesn't matter until the app is unusable. Once it starts becoming unusable, it only matters until it becomes usable again. And despite so much tooling being available for performance tuning, I don't know many people who actually understand how to use it.


On top of everything you have said so far about why "performance aware" programming is important, I have a few other reasons (which can be seen as more of a personal choice):

- Help the environment by allowing your software to run on older hardware, instead of blindly forcing your users to upgrade because your software is so badly constructed.

- Care about the craft in itself, and be proud of what you do for a living, by always trying to deliver the best results with the resources you have at hand.

- Respect your users/clients, and don't deliver a bad product just because they don't know any better.

- Don't assume writing "clean" code will somehow make your life any easier down the road. I have worked in big teams following those programming mantras, and the results were always the same: poorly performing code that was horrible to modify because of all the layers of abstraction you needed to learn (and relearn) before touching anything. The same applies to finding and fixing bugs.

- Writing decent code is not slower than writing poorly designed code. For any non-trivial task, understanding the problem and coming up with a solution is what takes most of your time, unless you type really slowly.


These are sort of more "philosophical" angles, but in general, I agree.

- Casey


Great video. Could you explain, though, why you disable comments on all YouTube videos on the MollyRocket channel? There seem to be some people, especially ones not associated with you, who take disabling comments - especially on these types of videos - as avoiding discussion.


Because I don't find YouTube comments to be high-quality discussion. They are nearly all low-quality, poorly organized, poorly researched, and mostly contribute no new information to the discussion. If people want to discuss this issue, they should take the time to do real work, and produce full articles, papers, or videos with real data that debate the points I make. That is what I do, and they should have to do that if they want me to take their points seriously.

- Casey


I bought a subscription to an online accounting package. From week one, I was trying to import the opening balances from the previous software (less than 1k rows in a three-column CSV), and it took five minutes to render the UI for the rows every time I pasted them. And I had to do it many times, because I was still figuring things out, so I wasted hours on it.

Another problem: when I typed in a dropdown to select an account, it made a request to the server with some kind of cooldown, so I had to stop typing and wait for it to fetch the list of accounts matching my search. The previous software was instantaneous, because the accounts were downloaded and stored on the client.
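(What the old software did is trivial by comparison - something like this hypothetical sketch, where the endpoint name is made up: fetch the accounts once, then filter locally on every keystroke.)

```typescript
// Fetch the account list once up front ("/api/accounts" is a made-up endpoint).
let accounts: string[] = [];
async function loadAccounts(): Promise<void> {
    accounts = await (await fetch("/api/accounts")).json();
}

// Every keystroke then filters in memory - no server round trip, no cooldown.
function matchAccounts(query: string): string[] {
    const q = query.toLowerCase();
    return accounts.filter(name => name.toLowerCase().includes(q));
}
```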

Every time I click a button, create something, hit next, or go to the next page, I have to wait for the page to load - sometimes half a second, sometimes 5 seconds. It is ridiculous. The old one is also a web app, but it has windows that I can open and move around, and there is no page reloading whatsoever; it almost feels like a native app.

Later that week, I bit the bullet, accepted the cost and stayed with the old one.

So, yes! Performance was the most important thing in my case! (even though the old one could be made faster).

By the way, the old software was written by me, and I always felt it sucked and thought switching was going to be amazing. Now I dedicate a small amount of time every week to improving it, and I'm never switching again.


In the course, will you talk about network architecture, batching requests, how to scale, managing memory, conserving battery life, and all those things you mentioned at the end of the video?


Yes, of course! But that comes at the end, once we have covered all the things you need to know to understand those effects.

- Casey


Nice, thanks!


So, the companies mentioned in your email didn't prioritize performance in their initial application versions because, at that time, they:

- Didn't need it

- Had sufficient hardware

- Found it more cost-effective to develop

- Could develop more quickly

- Released features rapidly

Only after years did they decide to refactor proven features for improved performance in production! :)

Discussions about the need for performance typically revolve around balancing development time and convenience against hiring skilled engineers to create fast, high-quality software from the start. This generally pertains to the first version of the software, rather than the 4th or 8th version, by which point features developed quickly with a focus on speed-to-market have already generated revenue and a user base that justifies their existence.

Indeed, performance may not be a top priority from the beginning.

Furthermore, it's important to stop criticizing developers who don't prioritize performance and start discussing the realistic impact of creating a performant or non-performant app based on its purpose:

- For games, calculators, or terminals: ensure the app meets the required frames per second and responsiveness, then release it to the market, since there is NO impact that depends on whether you have 120fps or 8000fps.

- For IDEs: identify the target audience and average project size, focus on features and accessibility, use any remaining time to optimize for adequate responsiveness, and then release to the market. There is NO impact whether loading a project takes 100ms or 500ms.

- For servers: this is where performance optimization becomes more important, as it allows users to consume fewer resources and reduce their total cost of ownership. However, for the first version, it's still crucial to focus on features and delivery to test the market. There IS an impact here, because multiple instances of your software are running, directly affecting TCO.


We do not actually know why they did what they did. My statement in the article that it may have been reasonable to ignore it was simply to acknowledge that it would not be inconsistent with the evidence to assume you could ignore performance for one version, because successful companies have done that. It is equally possible that it was not intentional, and that being more sensible about performance up front would actually have been better, because they would not have suffered the large impacts of rewrites later on. There are certainly companies with that track record as well.

Regardless, none of that means we cannot do substantially better. How things have been done previously is not some kind of maximum bound. To believe that we cannot, in the future, learn to ship software quickly that has better performance than what we have in typical V1s today is both defeatist and suggests there will be no significant advances in programming tools or methods.

Separately, I completely disagree with your framing. Criticism of deprioritizing performance is well warranted, because even in your examples, you are assuming it is being properly prioritized (making sure your game or calculator or whatnot meets the required FPS *is* prioritizing performance). That current software is almost ubiquitously extremely slow, extremely large, and very slow to add features strongly suggests that your perspective does not agree with reality here. Few programmers actually check whether their calculator runs even as fast as a calculator from 1982, and in fact, many don't, despite taking tremendously more system resources.

I will end by pointing out that Windows' own terminal was too slow to use *in this course* for actually running the simulator, and becomes the slow path if you do not pipe the simulator output to a file. The notion that this doesn't deserve ample criticism is absurd.

- Casey


(I say the following only for frontend development; I don't have any experience with other things, so I don't know.)

This may be my own insecurities speaking, but I think reasoning along the lines of "the time of a developer is more expensive than the time of a compiler/optimizer" will never end. And I think that is because what you do is hard to do - and not just in the sense of someone being lazy and putting it off. Hard in the sense that I think one actually needs to become smarter to do it. I don't wish to insinuate that other programmers are not smart; I'm only saying that I can see the huge chasm between your ability and my own. I don't know if it can ever be bridged, nor if programming will even survive long enough as a profession for me to try and then either give up or succeed. But I'm trying till the singularity arrives.

If I were able to program at the level I have seen you demonstrate in Handmade Hero, I think I would have made the things I built in the past much, much faster - so much so that the difference would have meant a small fortune for the company.

I think the whole reasoning about reaching the market fast is fine for Product Managers and Founders. But I recognize that as a programmer, if I had the same kind of programming kung-fu as you, I would still go behind their backs and make it fast anyway. In fact, I don't even care if I'm making the program faster than necessary. If I could do that, I would do it just for fun and to show off how fast I can make it go lol.


For most users who pay for Windows, there is absolutely no need for a fast terminal or calculator. There are a few cases, and a few Caseys, who might want something faster. However, that represents another target audience, one that will likely never pay for a faster terminal, so why should I be concerned?

>> making sure your game or calculator or whatnot meets the required FPS *is* prioritizing performance.

Let's name it "eventual performance", since it is not chased from day 1. You built a calculator, and if it runs at 25fps instead of 90, it doesn't matter. If it eventually runs at 1fps - well, then we have a reason to go back to the code and check.

The discussion leads to the Russian aphorism "It's better to be rich and healthy than poor and sick", capturing the obviousness of your "performant apps are better". But when you are limited in time, people, and the market-demand window, it's solely a matter of impact: sometimes it's better to have a slow Photoshop available yesterday than to wait for a revamped, faster Photoshop. Sometimes it is better to take the time to make a safer, better car rather than go to market with trash-on-wheels :)

I'm more inclined to pay for functionality rather than a terminal that is "just fast."

I would rather pay for 10 apps that solve my tasks and are ready within a week from a team of mid-level developers. Let it be 5s to retrieve data from the database. Let it be 12s for the IDE to load a project. Let it be 15s for PowerPoint to open my presentation. But I will have them "now".

That is better than paying for a single app developed by a small group of expensive, highly educated, performance-experienced engineers who, due to their limited numbers, wouldn't be able to deliver all 10 on time.

Such engineers are better off doing things like libraries, drivers, frameworks, network card firmware, and server-based apps - that is where the impact is. Otherwise those engineers become low-performance developers themselves, spending their highly valuable time and knowledge on apps whose users don't value performance.


I don't understand what you are arguing here. It appears as if you are saying, "software is slow because the developers are not good." What does that have to do with anything? Suppose you are right. Doesn't that mean we should be working on training them better?

If you are defending these excuses as being valid on the basis that "companies make slow software because they cannot hire better developers", that is a truly bizarre angle. The point of arguing against the excuses is to eliminate one of the ways in which people avoid improving their coding standards.

The entire point of shifting the culture of programming toward performance is so that developers will get better at it. Is it your contention that they simply cannot? That they cannot learn? That not only are they bad at it, but they will forever be bad at it? That the idea of a calculator app running faster than 10fps is so far beyond the conceptual realm of what they can do within a reasonable timeframe that we might as well give up and accept defeat? Do you really believe we can never get the industry to a place where, on a normal schedule, the calculator app made by the average developer doesn't run dog-slow despite running on the equivalent of several 1990s supercomputers?

Since it's a baseline today that most software developers do not know about performance, it doesn't seem like an unreasonable assumption that, if they were better trained, they would make faster software by default. In my experience, it does not take even a single day longer to make a "60fps calculator" than a "10fps calculator", because even a completely unoptimized - but sensibly written - calculator app runs at hundreds of FPS. The difference is just whether the developer knows how to avoid basic performance mistakes.

- Casey


> It appears as if you are saying, "software is slow because the developers are not good."

I will say, yes. I'm one of them.

> If you are defending these excuses as being valid on the basis that "companies make slow software because they cannot hire better developers", that is a truly bizarre angle.

I would again like to put forward something possibly controversial here. I think the problem isn't simply that companies cannot hire good programmers, but that their whole incentive structure doesn't even have a place for someone who brings performance awareness to the table. Now, I have never worked for a big company with seemingly infinite resources like Google or Facebook or Amazon, so it could be that those companies can and do hire people like that. But the places I have worked so far are basically organizationally blind to the existence of the "performance awareness" axis in a programmer. Usually you get rated on how many features you shipped. I wouldn't say that making something already shipped perform better looks bad in your performance review, but it certainly won't look any better than shipping a bunch of new features, all of which were slow. Shipping features is what counts, really. Maybe the big companies are different.


It is absolutely OK to have a 10fps calculator :)

It is absolutely OK to use clean code/git branching/junior developers etc. for feature-based, non-impactful apps like calculators, terminals, PowerPoint, Photoshop.

It is not OK to use highly educated engineers on those same tasks instead of having them build libraries, frameworks, drivers, firmware, server apps, training courses, etc.

And if one of these professionals builds a 900fps calculator or a 1000fps terminal instead of a 400Gbit eBPF filter, a 4KB wifi router driver, a better programming language, or a better core library, they are wasting their performance. They aren't actually chasing performance; they are chasing EGO, a sense of self-importance or self-esteem, with no real impact at all, and I don't think that is OK.

And from the business and customers' perspective: if a server app is performant and has a lower TCO, it's a must-have. If a terminal, calculator, or Photoshop starts to run 15% faster than usual, it is just an unnecessary bonus; the market doesn't need it, it needs functionality.

So, yes, I defend those excuses in segments of apps where it is OK to be slow, OK to have weekly crashes, OK to use 1GB of RAM instead of 128MB, IF such behavior is the result of delivering the app sooner to cover demand now, making a higher impact through earlier adoption.

Those excuses don't work in segments of apps where performance is a direct value. I've seen a TV remote with two web servers and Java on it. I don't care; it works and does its job. Maybe a junior developer came up with that bizarre idea, and I'm happy with that - it means a highly educated developer was busy with something else, possibly more impactful )

Since the excuses are OK for ordinary software and don't work for impactful software, both the excuses and the debunking of the excuses make no sense at all.

The companies in your email are among the TOP revenue-generating companies, having almost achieved monopolies in their segments. It is just stupid to imagine what would have happened if they had taken the time to build performant apps instead of doing what they did.

But it is telling that we don't see another "Facebook" that was built with performance as the top priority. If there was one, it died; it didn't pass the evolution filter. That means a lot.


This is completely false, because we have already seen the exact opposite as well. By your logic, the fact that the Apple iPhone beat WindowsCE would somehow prove that only by offering a fast, responsive, 60fps experience can you be the dominant player in an industry. But we know that's false, because other industries have counterexamples.

The same would be true for MapQuest, which was completely destroyed by Google Maps on the basis of Google Maps being real-time while MapQuest was not. Would we conclude that the "only way to be the dominant map vendor is to ship your first version with high performance"? I don't see why we would, because we can look at other web market segments (tax software? collaboration software?) where the performance is awful to this day.

Evolution doesn't tell us much. It usually tells us more about who happened to decide to try something first, not who was the best at doing that thing. Sometimes we see clear examples of someone coming in with a superior product, but in many software segments, there has been very little competition and we simply don't know very much about what works and what doesn't. This is also unsurprising given that we're talking about an industry that's barely 25 years old.

- Casey


So, is it OK to have a 10fps calculator if there are no other calculators on the market? )


I really don't get the whole part about the importance of "features" outside the realm of REALLY new business models, to be honest. Leaving pointless examples like a calculator aside, the vast majority of professionally used software was pretty much "feature-complete" years ago. The real problem is that people have to buy more powerful hardware regularly just to MAINTAIN the level of functionality they already had, because the software gets stuffed with "new functionality" or is rewritten in a nonsensical way.

In my opinion, the whole performance debate is not about this "new idea", time to market, or features on day one. It's about fighting the decline of software that has very clear usability and functionality expectations.

I shouldn't need to buy a new phone to be able to read and search through my emails without lag (or with less of it...), and there are no meaningful new features an email client needs. The majority of Photoshop users use maybe 20-30% of the software's abilities, yet all of them have to provide resources for the complete package.

Further, the distinction between good programmers, who should be writing the "impactful" software, and "the others", who "also need something to do", is just painful. If administration employee A uses administration software B on a daily basis, then its performance and stability are pretty impactful to him. (And to me, if I'm sitting in his office and he accidentally mis-clicks again...)

Just my rambling thoughts...


> And if one of these professionals builds a 900fps calculator or a 1000fps terminal instead of a 400Gbit eBPF filter, a 4KB wifi router driver, a better programming language, or a better core library, they are wasting their performance. They aren't actually chasing performance; they are chasing EGO, a sense of self-importance or self-esteem, with no real impact at all, and I don't think that is OK.

I think that's okay, and I want to be that guy. Hehe.

> So, yes, I defend those excuses in segments of apps where it is OK to be slow, OK to have weekly crashes, OK to use 1GB of RAM instead of 128MB, IF such behavior is the result of delivering the app sooner to cover demand now, making a higher impact through earlier adoption.

You have not met the customers I have met. ("You have not seen what I have seen" -- Morfydd Clark). If you deliver an app that is not a game or something else that obviously needs 1GB of RAM, and it uses 1GB anyway, customers will be on your back. You will lose so much business; I have seen it happen. And the excuse that by not using the 1GB of RAM you would ship late and lose even the customers willing to tolerate that extraneous usage is not a good one when the course is promising to teach us how to deliver on time AND make the app use less. The loss of time you are imagining is not going to happen, if I understand what the course is about. It's not about learning to deliver more robust applications at a slower pace. It's about going at the same pace but making much better things.


In general, I'd agree with you. My problem is that it doesn't happen that way. At first, you have a small number of developers who all understand the app and the trade-offs being made, probably just thinking about making something work, without any regard for performance. Usually they use the app themselves, so if it has performance issues, they get resolved as needed.

Then the company gets a user base and it grows. Each team is assigned a smaller section of the software and is MEASURED by the features they deploy. Now the app is big, and you as a developer don't want to be responsible for breaking things, and since you don't really understand all of it, you just add your bit to the pile.

You mentioned that optimization becomes important for servers. I assume you mean backend development in general; if that's the case, it couldn't be farther from the truth. In reality, at least in ALL the companies I've worked for and know about, they care far less about performance there, since they can just add one more server.

The focus there is scalability - having the ability to add one more server. This results in a culture where you put Kafka in between everything you have, because it "scales", creating an endless chain of unnecessary network calls. All of it to handle an amount of customers the initial solution can no longer handle, so they start adding machines. Doing it better is usually not the first approach; instead, they create new "services" and add more servers.

There's even a funny video about this: https://www.youtube.com/watch?v=y8OnoxKotPQ

It's funny, true, and sad at the same time.

With this course, Casey brings forward the option to make something better - and we would probably need a whole bunch more users (like a TON more) before the solution truly is to add more servers. As for your point about tackling performance only when it matters: I agree, but in practice a LOT of things are being done that add no value at all and are really slow, for two reasons - we don't measure that performance, and doing it this way is the standard, so it's a safe bet; you won't lose your job for deciding to create an architecture this way.


>> If the first version of your software runs on the servers slower than it should and so costs more to your customer in terms of cpu usage, will you refund them for the money they wasted when you will deploy the faster version?

It is all about impact, ROI, and TCO. If a customer paid $1 for my slow software and another $4 for the excessive hardware my slow app needs, but my software made him $10 ($5 profit), I'm golden. I could spend another 4 months making a better version and keep market demand waiting for it, but that would mean at least 4 × $5 of potential profit loss for such customers.

*Added:* today it is simpler and faster to just connect OpenAI's API and analyze email texts with a few hours of junior-dev work, having the result by the end of the day, than to find a "Casey"-level engineer and pay for a highly performant analyzer, which would be the reference for the best app possible... but in a month )


Let's compare:

Option 1:

I develop the first version using what I have on the table: average frameworks, average code style, no performance profilers. I build the first version fast, on the foundations I'm used to, not really thinking about SoA vs AoS, moving and repeatedly re-parsing stringified JSON objects, etc. I write code as it comes into my head, without revisiting any previous parts.

The app is ready today. It costs $1 + a $4 performance penalty per month.

It makes the customer $10 per month, so that's $5 profit starting today.

Then I release a second version, refactoring the frameworks and optimizing the slowest 50% of the functions after connecting a performance profiler.

Because I have to substitute the framework with my own code, rethink data storage, and minimize data movement in memory - maybe even move from C# to Jai - it takes 4 months.

The second version has a significantly smaller performance penalty: $1 instead of $4.

The customer's profit rockets from $5/mo to $8/mo.

At the end of the year, the customer has:

$5 × 4 mo + $8 × 8 mo = $84

Option 2: I build the optimized version from day 1. Release in 4 months.

At the end of the year, the customer has:

$0 × 4 mo (there was no app) + $8 × 8 mo = $64

...and he pays me $4 less.

The negative impact of prioritizing performance: roughly 25% in real money.


Why do you keep assuming that Casey Muratori would ship the software later? I think he would ship it in the same amount of time as the junior dev, but it would be much, much faster. I know that watching him think of SO. MANY. THINGS. as he goes makes it feel like he is doing so much more, and so must need more time. But he also types at a furious WPM. So maybe it wouldn't even be late. It might cost the company more to hire him, but I mean... who cares, they have money to burn.

So, if there were a lot more programmers like him to hire, then a lot of software would get faster.

Just picture it... a room full of Casey Muratoris... making a calculator. There is a vending machine, which only dispenses almond milk. Jamba Juice is banned on the premises.


This is an unfortunately underappreciated aspect. If people cared more about performance, the languages and libraries would be better, and so it would be easier for everyone to make faster software.

- Casey


Casey, this message is sorely needed in the programming world. But apparently the rhetoric has spread beyond the realm of "enterprise software" and into video games too. Look at this reply I got 4 hours ago in the comments section of a video going over your debate with Uncle Bob:

"You act as if performance bottlenecks are 99% poorly optimized code. If you’re making some game with realistic graphics for example, the performance hoard will always be the rendering no matter how optimized it is. So if my character movement is 20x slower than it could hypothetically be why would I care? I could heavily optimize all off the gameplay and the resulting performance of the game would barely change. It would be much nicer then to make it as readable and modular as possible"

You'd think that if there were any industry where performance-aware programming mattered, it would be the video game industry. But look - the rhetoric has spread enough that this guy actually uses a video game as an example. Assuming that by "character movement being 20x slower than it could" he is referring to the performance of the section of the codebase dictating character movement, and not actual character speed through the world (which would have a huge impact on rendering performance), this still seems like a very bad example.

Can you go over some concrete reasons why this rhetoric especially doesn't hold true for video games in particular, as opposed to 'enterprise software' like Facebook etc.?


Absolutely agree with "if my character movement is 20x slower than it could hypothetically be why would I care? I could heavily optimize all off the gameplay and the resulting performance of the game would barely change. It would be much nicer then to make it as readable and modular as possible".

A game is the thing that runs fullscreen, the only focused application. If it meets the fps SLA needed for good gameplay, that is enough.


You agree because, much like the OP, you don't understand the difference between "heavily optimizing" and "not writing atrociously slow code because I don't know any better."

Regardless, the OP cited a horrible example, because game logic that revolves around the main character and their movement physics actually plays a substantial role in the rendering cycle, and in pretty much every major process in a video game. Also, games have been getting horrendously slow and bloated over the past 20 years due to this exact kind of dismissive behavior.


I concur, with the additional remark that "if it meets the fps SLA needed for good gameplay, that is enough." If it significantly affects the gaming experience, the amount of "happiness hormones" produced, and so on, then it is worth optimizing and investing time in learning how to write faster code. If not, it is OK even to have an executable size of 500MB instead of 30MB.

When I write code for web applications, I prioritize time-to-market while maintaining a satisfactory UX based on an "SLA", with the possibility of optimizing later if needed. When I write code for servers, performance ranks in the top 3 priorities. This is purely a matter of impact.


I've encountered a lot of other excuses from people who just flat-out don't believe they'll ever need to be concerned with performance, because their software solves a niche problem - so they're effectively just waiting until a better version of their solution is made by someone else.

My father-in-law works in flooring, and the software he uses to create invoices after drawing a floor plan - which pulls in QOH (quantity on hand) at the store - is essentially one of only three solutions that exist. His survives literally just by supporting an arcane file format used by only one company, but even then, adoption is slowing versus other solutions as those same corporations try to migrate AWAY from legacy systems. Combined with an old Windows UI, the slow response time all but guarantees its eventual demise.


Well, I can't really argue with that. "I don't want to learn about performance because I don't care about my product, and if someone wants to make a faster one, I'm happy to cede the market to them." It's a totally valid position! Not very admirable, but clearly logical.

- Casey


How often does the risk of being overtaken solely on performance actually materialize?

Do the majority of paying users actively seek faster calculators, terminals, Apex Legends, Call of Duty, Photoshop, and so on? Or is the perception of user experience based not solely on software speed, but rather on a combination of factors such as speed, functionality, accessibility, support, price, learning curve, and available plugins?


I'm not sure the primary motivator for companies is being overtaken. Rather, it is their data that shows their customers engaging more with ads or participating more in in-app economies. I don't think it is a fear of being outcompeted (at least usually), but rather the loss of revenue per user that comes from them being less engaged and less active on the platform. This is what these companies refer to when they talk about user research that shows performance matters to their bottom line.

- Casey


Does this truly imply that a user would switch from Photoshop to something like GIMP just because it's faster? Adobe's revenue doesn't seem to support this idea, even though Photoshop is a relatively slow application and there are numerous faster - and even free - alternatives available for various target audiences and portions of Photoshop's functionality.

Isn't it possible that this is merely your perception, formed through cognitive bias based on the knowledge that "it can be made faster"? This could create an additional layer of illusion, a thought experiment leading to the belief that "performance matters to their bottom line", which might be false in most situations, particularly when discussing desktop, web, or mobile applications.

This is by no means a personal attack, but rather a desire to understand the real source of the difference between my experience with publicly available data and your information, in which competition can be won simply by having a faster application.


Are you reading my replies? I literally just said I don't think it has anything to do with competition. The data from Facebook, Microsoft, and Google that I referenced in the article showed that users spend more money, see more ads, etc., if application performance improves. This is not my "perception". This is data they've collected and published, many times, in many places.

If you'd like to know what Adobe thinks about this issue, you could, you know, *read their blog where they tell you*. Here is a quote (from https://blog.developer.adobe.com/boosting-website-performance-with-adobe-experience-platform-web-sdk-and-edge-network-329fcf70fdf9):

"Performance equals money. In 2018, Google published a report which showed:

As page load time goes from one second to three seconds, the probability of bounce increases 32%.

As page load time goes from one second to five seconds, the probability of bounce increases 90%.

As page load time goes from one second to ten seconds, the probability of bounce increases 123%.

Here at Adobe, we’ve been working hard to contribute to your success in this area and have some great news to share: we recently released Adobe Experience Platform Web SDK and Adobe Experience Platform Edge Network, both part of Adobe Experience Platform Data Collection Service. Platform Web SDK is a consolidated JavaScript library built from the ground up for implementing Adobe’s experience products on your website. It supersedes visitor.js, appmeasurement.js, at.js, and dil.js, which I will refer to in this blog post as “legacy libraries”. Platform Edge Network consists of globally distributed servers with which Platform Web SDK communicates. Working in tandem, Platform Web SDK and Platform Edge Network greatly increase marketing implementation performance over previous Adobe technologies."

Adobe, just like everyone else, rewrote their stuff for performance. And the reason is the same: it *is not* because they are afraid that someone else is going to steal their market share, and it's not because their customers think someone else is going to steal theirs. It is because they know that, for them and for their customers, performance equals more revenue. This has been ubiquitous knowledge everywhere for the past decade - except, apparently, in comment threads on the internet like, unfortunately, this one.

- Casey


You have the same "if"s as Casey :)

That is pure cognitive bias and a thought experiment.

Yes, between a bad world and a better world, people will choose the better one.

Yes, it is better to be rich and healthy than poor and sick.

But in reality, it is better to have a slow Photoshop yesterday and multiply the efforts of graphic designers around the world yesterday, than to wait years for a better Photoshop while continuing to draw advertisements in MS Paint :)


Ooh, I like this ending: "if we do a little bit of work to preserve those performance options things get a lot easier"

This seems to hint at a way of doing development that lets you prototype and do quick-and-dirty V1 releases while at the same time setting yourself up to improve performance down the road without a rewrite.

These days I write my personal projects in C, and it can take some time to get things off the ground. I would love to know a way to improve my prototyping time without giving up on having something I can optimize later, and without being locked into some other unoptimizable framework!


But Facebook, Uber, and Netflix aren’t real eNtErPrIsE SoFtWaRe


I’m just waiting for people to come and say most companies are not like Uber, Facebook, etc. Lol


"We would like to focus on the kinds of programming that is done at *un*successful companies..."

- Casey
