
Thanks for making this public - I fight these battles at work all the time, and it's nice to have a concise POV from someone who corroborates what I can show through experimentation.

The pushback I imagine getting from this is about maintainability, "Developer Experience" issues, etc., which I don't have the experience to refute. But at least I can measure improvement - proponents of those other concerns kinda can't.


There is some kind of a weird thing where people think good code is not maintainable, but bad code is. It's like "if we make this code bad to begin with, then when we extend it with bad code, it will not get worse", which is true, I suppose :)

- Casey


Yeah, it's just hard to argue with people "who have seen some shit". As in, I'm 10 years into a programming career with some pretty cool stuff to show for it, but this argument is always the showstopper. On the other hand, it makes it very exciting to see things like this start to pop up.

Also makes me feel better about the looming threat of AI generated code - an AI that was trained on decades of "Clean Code". lol


"Also makes me feel better about the looming threat of AI generated code - an AI that was trained on decades of "Clean Code". lol"

Yes, exactly! Thank you for mentioning it! I feel so alone when I try to talk about this with others...


Based on my own use of ChatGPT, this is true, except when it is busy hallucinating invalid syntax - which is at least 90% of the time for anything slightly nuanced or lacking tons of training examples, like a HackerRank solution.


Sir, the way you played down your experience in your original comment is a bit misleading. 10 years! C'mon! You have got to have seen some shit!

If I had 10 years, I'd fight with everyone.

And yes, Copilot is... horrible.


I think that when you add code, you multiply the quality (say code quality is between zero and one: extending 0.9-quality code with 0.9-quality code leaves you at 0.81). So unless the code was a flat zero to begin with, extending it makes it worse - a lot worse - even if it was terrible to start with.


Casey is really nailing his theses to the door in this video.


Fantastic! It's really a relief to see someone make this case methodically and non-ideologically. (Your point about that switch statement is right on - sometimes just laying stuff out in a sequential way makes it obvious where things can be cut down.)

I've sat in on conference talks where the guy up at the front took literally 5 lines of code in a single file and blew it up into like 50 lines across 10 files, and his justification for it was, verbatim, 'we want the next programmers who see our code to think we are smart'. It all felt very wrong, and yet I assumed he must know something I didn't - after all, he was presenting and I was in the audience.

So, I dutifully worked for several months rebuilding a project following these clean code principles, and just found it created a lot of new spaghetti, duplication, and piles of boilerplate where everything had once been so direct and readable. Polymorphism especially - it *seems* like such a powerful idea, but it ends up making the flow of information significantly harder to follow. I ended up abandoning the rebuild and simply refactoring my 'unclean' code in a last-minute marathon. It made me feel like a bad programmer, like I'm hiding some secret shame. There's such weird peer pressure around it now, and it seems to mount a little each year - with some people, it's like the code equivalent of saying you're a Trump voter or something.

Anyway, I'm subscribed now, and I'm gonna go back through the previous ones and watch them. I appreciate your pro presentation skills and clarity.


This probably goes beyond the scope of the course (maybe it's better suited to an architecture course), but covering the "SOLID" principles and their downsides, as you just did with "Clean" code, could be great.


There is a series of blog posts by David Bryant Copeland arguing that SOLID is not solid, as well as a book:

https://naildrivin5.com/blog/2019/11/11/solid-is-not-solid-rexamining-the-single-responsibility-principle.html

https://solid-is-not-solid.com/


Have you already seen his video "Where Does Bad Code Come From?" on his Molly Rocket YouTube channel? He talked about SOLID there, though less from a performance perspective and more from an architecture one.


Nice! I just saw it.

The analogy he made - that writing clean code by applying SOLID principles is similar to blessing the food before eating - reminds me of the "Introduction to Git" episode of Handmade Hero and its "git bless" command :P


Can you tell me which episode that is? I haven't seen it.


The title of the episode is "Handmade Hero Day 523 - Introduction to Git".


Holy shit this is gold. I had no idea Casey made this kinda video. Hilarious


I was confused at first, and sad that he was advocating for git. But after 5-10 minutes I got it. I'm certain that if I showed it to one of my colleagues at my company, they just wouldn't notice.


thanks


It's nice to hear confirmation of programming practices that I've also found problematic.

For example, I have a note in our company handbook telling people to disregard linters' warnings about "cyclomatic complexity", because resolving the warning typically involves extracting a "small" function that does "one thing" - but after this extraction the code is harder to read, harder to refactor (as when the area-calculation logic ends up in multiple files in the video's "clean" code), etc.
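A tiny illustration of the kind of extraction being described (hypothetical code, not from our handbook or the video):

    #include <stddef.h>

    /* What the complexity linter pushes toward: the validity rule extracted
       into a helper, away from the only loop that uses it. */
    static int IsValidSample(float S)
    {
        return (S > 0.0f) && (S < 100.0f);
    }

    static float SumValidExtracted(const float *S, size_t N)
    {
        float Sum = 0.0f;
        for(size_t I = 0; I < N; ++I)
        {
            if(IsValidSample(S[I])) Sum += S[I];
        }
        return Sum;
    }

    /* The direct version: higher "cyclomatic complexity" on paper, but the
       whole computation reads top to bottom in one place. */
    static float SumValidDirect(const float *S, size_t N)
    {
        float Sum = 0.0f;
        for(size_t I = 0; I < N; ++I)
        {
            if((S[I] > 0.0f) && (S[I] < 100.0f)) Sum += S[I];
        }
        return Sum;
    }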

We outsource a lot, and my code reviews after handoff involve undoing a lot of these practices. This typically results in less code that is easier to read and maintain. I'm not sure what these practices provide, except maybe preventing inexperienced programmers from writing something really incomprehensible and buggy (but in that case they should be given more coaching and supervision, rather than "clean code" principles to follow).


I totally agree that breaking things into functions just to make the functions small, without actually understanding what the code is supposed to do, is a very bad practice. I remember once doing something similar that led to a bug in the program I was writing. Debugging the problem took me a while, until I found out that, while trying to prematurely break things into different functions, I had split some code that needed to be in the same function for things to work properly.

I also don't find that practice really "clean". Over-fragmentation of the code into smaller and smaller functions for its own sake tends to make it a lot harder for me to read the code, because I constantly have to break my flow to understand what the code is trying to do.


Always found it amusing that using dynamic dispatch, and being in the dark about how things work, is 'clean.' :)

Great stuff, thank you.


One thing that gets me is the staggering selfishness of the "developer experience" people. Not only are their pronouncements usually totally untested, and not only do they constantly cite some Big Other (who would certainly know if these practices didn't work!), but the one that hurts me the most is "my time is worth more than the computer's time", or flippant remarks about engineering salaries. Even if "clean code" does make development faster (unproven!), thinking that a couple hundred bucks of engineering salary is worth actual human lifetimes of loading screens (which are easy to reach at even modest user counts) is just beyond the pale to me.
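(To put rough, illustrative numbers on "human lifetimes": 10 million users each losing 3 seconds a day to a loading screen is 30 million seconds - about 347 days of aggregate human time - burned every single day, i.e., close to a full human year, daily.)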


I feel like you want to do a running slap


Playing devil's advocate here:

I don't think clean code was ever about performance, so was this comparison unfair? I'm not saying the performance comparison shouldn't be made. But maybe the appropriate comparison would be readability and maintainability. To "beat" clean code, one should show that performance-aware programming is more readable and maintainable, right? Even in large code bases. Give a slightly more complicated performance-aware program to some programmers: how quickly and how well can they understand it and add features to it? Do you have these measurements?

Of course, the more the programmers have been trained on clean code, the harder it will be for them - I'm aware. You're probably suggesting a whole revision of the programming-education "ecosystem", starting from "data transformations" instead of "objects and what they do" in the early stages of teaching programming. The latter seems so natural and intuitive (or maybe I just forgot my struggles when I learned OOP). That's quite an effort.

On another note: another way clean code is practiced is by following "don't write code for the computer, write code for other programmers". That comes close to saying "programmers don't need to know or care about what the computer is actually doing", which seems to be the general mindset that causes the 1000x-slower software.


A video around "code that is performant and easier to understand and add features to" would definitely be great, and I would definitely watch it, but given the name of this course, I imagine it would be outside the scope here.

I do think that Casey could succeed at creating such a video, but with this video he makes a simpler point:

Even if an engineer thought that this "clean code" method created more understandable, easier-to-edit code (which, again, is a big claim worth testing), could you justify extinguishing 15+ years of hard-won hardware performance gains to get that benefit?

Because that is essentially the choice that is made every day, but it's made without any attempts to justify it.


You may want to check out a video by Shawn McGrath. It's not a study with metrics, but he shows one case where the clean code is just impossible to understand. He is also eating samosas and drinking scotch... it's an absolute gem of a rant. I copied the video link at the timestamp where it starts: https://youtu.be/q4nUK0EBzmI?t=11512

There is also another one here, showing how these principles produce code that is just not good for reading: https://www.youtube.com/watch?v=IRTfhkiAqPw


I agree with Seth's reply to this, but I also feel like adding that I am not sure anyone can really claim to 'have these measurements' when it comes to concepts like readability and maintainability: they're nebulous to pin down, and it's hard to design (and afford) an experiment that captures the kind of scale you probably actually care about - i.e., not the effectiveness of a novice on day 1 of the job, but some longer-term success. (And is there even a real definition of whether a code base has 'true' clean code, given that these are often just subjective rules of thumb?) I wouldn't treat this kind of software-engineering self-help advice as if it were a science.


(I see another comment pointed to Casey's "Where Does Bad Code Come From?" talk, which also comments on the problem of measuring these things: he is more optimistic that we could measure them, but points out that the clean code advocates have not even really tried to do so: https://youtu.be/7YpFGkG-u1w?t=1531)


I agree with this. I feel like no one can honestly argue that the "clean code" presented here is performant. It's objectively slower, as Casey shows.

I think what "clean code" may be attempting to do is provide guidelines for managing software complexity, but it fails because it's applied blindly at too low a level. When you start drawing boundaries everywhere, you actually make the system more complex when you zoom out.

The actual problem of managing software complexity - where and how to draw your boundary lines, how to break up your system into smaller pieces that are easier to understand, etc. - seems to me too hard to solve with a simple ruleset like "clean code".

The programmers I've met who are good at this seem to have gotten there by writing and reading lots and lots of code, for lots and lots of different kinds of problems. The sheer volume of work they've gone through gives them some sort of instinct for making good architectural decisions. And even then, they still make some wrong decisions the first time they solve a new problem!


Great talk, thank you. Just a little disappointed not to have any information on how far away the sun is in this lecture - it seems like a very important point in a programming lesson :).


This comes at the right time, because I actually have a use case that shows exactly how writing best-practices software hurts performance, and even memory usage, a lot!

We have hundreds of signals containing 10k up to 500k samples each, and I always need the minimum and maximum values of the actual data. But there is a caveat: not every sample is valid, so there is a second array, the same length as the data, storing a flag that indicates whether each sample is valid or not. It is not sorted, so we always have to check the flag for every value.

The team writing the library that computes the min/max values used a thread-based for-loop that computes the min/max over each valid value. It is also guarded behind at least 4 levels of indirection before the math is actually executed. It is painfully slow and uses a ton of heap-allocated memory. That code was written using SOLID and clean-code principles -.-

After analyzing the code, I wrote a much faster solution using only vector instructions (SIMD) and stack-allocated SIMD vectors. It does the same thing, using conditional selects so that only valid values contribute. It is at least 10x faster than the other code, and after unrolling the loop 4x, it's twice as fast again.
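For readers who want to picture the technique, here is a minimal sketch of that conditional-select min/max in C with SSE intrinsics (the real code is C#, so the names, the 0/0xFFFFFFFF mask format, and the count being a multiple of 4 are all illustrative assumptions):

    #include <emmintrin.h> // SSE2 intrinsics
    #include <float.h>
    #include <stddef.h>

    // Masked min/max sketch. Valid[I] is assumed to be 0x00000000 (invalid)
    // or 0xFFFFFFFF (valid), mirroring the parallel validity array described
    // above, and Count is assumed to be a multiple of 4.
    static void MinMaxValid(const float *Samples, const unsigned *Valid,
                            size_t Count, float *OutMin, float *OutMax)
    {
        __m128 Min = _mm_set1_ps(FLT_MAX);
        __m128 Max = _mm_set1_ps(-FLT_MAX);
        for(size_t I = 0; I < Count; I += 4)
        {
            __m128 S = _mm_loadu_ps(Samples + I);
            __m128 M = _mm_castsi128_ps(_mm_loadu_si128((const __m128i *)(Valid + I)));

            // Conditional select: invalid lanes are replaced by a neutral
            // value, so they can never win the min or the max.
            __m128 ForMin = _mm_or_ps(_mm_and_ps(M, S),
                                      _mm_andnot_ps(M, _mm_set1_ps(FLT_MAX)));
            __m128 ForMax = _mm_or_ps(_mm_and_ps(M, S),
                                      _mm_andnot_ps(M, _mm_set1_ps(-FLT_MAX)));
            Min = _mm_min_ps(Min, ForMin);
            Max = _mm_max_ps(Max, ForMax);
        }

        // Reduce the four lanes to scalars.
        float MinLane[4], MaxLane[4];
        _mm_storeu_ps(MinLane, Min);
        _mm_storeu_ps(MaxLane, Max);
        *OutMin = MinLane[0];
        *OutMax = MaxLane[0];
        for(int I = 1; I < 4; ++I)
        {
            if(MinLane[I] < *OutMin) *OutMin = MinLane[I];
            if(MaxLane[I] > *OutMax) *OutMax = MaxLane[I];
        }
    }

The 4x unroll described above would just process 16 samples per iteration with four independent accumulator pairs, which gives the CPU more independent work to overlap.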

I didn't change the underlying crappy data structure, because it was written by another company.

Also, the language is pure C# / .NET 6.0, so I can't write it in C, unfortunately :-(


I think one thing the lecture should have shed more light on - or that could even get a dedicated video - is the fact that these "clean" code rules fundamentally go in the opposite direction of what the CPU wants you to do to run efficiently. Virtual functions, arrays of pointers to objects, separation of concerns, each object knowing how to do its own work in isolation instead of the work being done on multiple objects together, etc. - all of these ideas go against the features the CPU has for improving performance. They confuse out-of-order execution, pollute the cache with pointer chasing, and make it difficult to load data into SIMD registers.

Because of that, the more someone applies these ideas, the worse it is for the CPU, today and even in the future.
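To make the contrast concrete, here is a minimal sketch of the two layouts being described, in the spirit of the video's shape example (all names are hypothetical):

    #include <stddef.h>

    // "Clean" layout: objects behind pointers, virtual dispatch per element.
    // Each iteration chases a pointer and makes an indirect call, which
    // hinders prefetching, branch prediction, and vectorization.
    typedef struct shape shape;
    struct shape
    {
        float (*Area)(const shape *Shape);
    };

    float TotalAreaVirtual(shape **Shapes, size_t Count)
    {
        float Total = 0.0f;
        for(size_t I = 0; I < Count; ++I)
        {
            Total += Shapes[I]->Area(Shapes[I]);
        }
        return Total;
    }

    // Flat layout: one contiguous array of plain structs and a switch.
    // Sequential loads and no indirect calls - the access pattern the
    // cache, the out-of-order core, and SIMD units are built for.
    typedef enum { Shape_Circle, Shape_Rectangle, Shape_Triangle } shape_type;
    typedef struct { shape_type Type; float Width, Height; } shape_union;

    float TotalAreaFlat(const shape_union *Shapes, size_t Count)
    {
        float Total = 0.0f;
        for(size_t I = 0; I < Count; ++I)
        {
            switch(Shapes[I].Type)
            {
                case Shape_Circle:    Total += 3.14159265f * Shapes[I].Width * Shapes[I].Width; break;
                case Shape_Rectangle: Total += Shapes[I].Width * Shapes[I].Height; break;
                case Shape_Triangle:  Total += 0.5f * Shapes[I].Width * Shapes[I].Height; break;
            }
        }
        return Total;
    }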


I agree with that, yes. Older programming styles tend to be more similar to what CPUs actually do; newer styles seem completely divorced from any idea that a CPU is involved.

- Casey


Thanks for making this video! I do notice one thing: we keep the same memory size for all the shapes by using a union. This avoids the pointer indirection, but it also wastes memory. In this case it probably doesn't matter much, but what if we have polymorphic types of objects with drastically different memory sizes? Isn't it a waste to use the maximum size to hold all types of objects?


This is very rare, but when you have this case there is a very simple solution: keep more than one array. This is also the solution when the processing differs too dramatically.

In fact, you would often prefer to keep each type in a separate list, as it is both more space efficient and more efficient to process. But most programming languages make it onerous to write code that way, so it tends to be done less frequently than one might wish.
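For illustration, a minimal sketch of what that separate-arrays layout might look like (hypothetical names, not code from the course):

    #include <stddef.h>

    // One tightly packed array per shape type: no union padding, and each
    // processing loop is a straight-line pass over homogeneous data.
    typedef struct { float Radius; } circle;
    typedef struct { float Width, Height; } rectangle;

    typedef struct
    {
        circle *Circles;
        size_t CircleCount;
        rectangle *Rectangles;
        size_t RectangleCount;
    } shape_lists;

    float TotalArea(const shape_lists *Shapes)
    {
        float Total = 0.0f;
        for(size_t I = 0; I < Shapes->CircleCount; ++I)
        {
            float R = Shapes->Circles[I].Radius;
            Total += 3.14159265f * R * R;
        }
        for(size_t I = 0; I < Shapes->RectangleCount; ++I)
        {
            Total += Shapes->Rectangles[I].Width * Shapes->Rectangles[I].Height;
        }
        return Total;
    }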

- Casey


I'm loving the idea of being able to escape the performance shackles of writing clean Ruby code for CPU-bound cases, and I'm looking forward to finding out what I can do with the rest of the course... as well as hopefully learning how to do higher-performance graphics programming.

One question, though: mightn't Amdahl's Law explain the proliferation of clean code practices in bespoke enterprise software and "web applications"? That type of code tends to be very storage-bound, with multiple hits to databases that are an order of magnitude larger than RAM, and with significant network latency on both request and response. I've had the experience of rewriting a long-running Ruby ETL job in Rust, only to find that IO overhead meant I saw no speed improvement unless the working set fit comfortably in RAM.

Under these conditions, it seems like the relative performance costs of “clean code” become negligible?
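(To put rough, illustrative numbers on the Amdahl's Law point: if a fraction p = 0.9 of a request's wall-clock time is database and network IO, then speeding up the remaining CPU-bound work by a factor s gives an overall speedup of 1 / ((1 - p) + p/s) - at best 1 / 0.9 ≈ 1.11x, no matter how large s gets.)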

That said, I think a lot of that is changing now. I've had plenty of "storage-bound" workloads in the past that would easily fit in the 1TiB+ RAM of modern servers, and a 24x PCIe 4.0 x4 NVMe array seems like it would probably remove the bottleneck altogether?


I would like to object to the quotes around web applications. They are applications and they run on the web. Why the quotes?

Also, I'd like to add to this question. I have seen one such IO-bound case in a Postgres database where 30-40k reads push the CPU utilization of the server running the RDS instance up to 95% and then kill it. Now that is strange - that DB is supposed to be able to do millions of rows per second. So why can't the backend people optimize it? If the course could have an episode looking at SQL databases - where we have just about no control over the program that's interacting with the CPU, and yet we must somehow optimize the queries to use the cache effectively - I would be very grateful. I want to know how to take just any program, a DB server or something else, figure out if it's performing badly, then tweak whatever handles I do have on it and be able to tell if I'm making it better or worse.


No derogatory intent with the quotes; they were intended as a grammatical device to indicate the grouping of a number of disparate things into a loose category, given the range of things that use HTML/HTTP UIs these days.

There is no 1-to-1 relationship between the number of DB requests and the CPU and memory usage of the server, since different database tables and query plans will vary by orders of magnitude in the CPU and memory needed for a single query. So, as you have discovered, you can't really say that a particular database is rated for a certain number of transactions per second - and that's even before you consider how widely database hardware varies.

Backend devs absolutely should optimise the database queries HARD, and often fail to do so. Postgres tuning and query optimisation is its own dark art and there are a lot of in-depth resources for it available, not least of which is the Postgres online documentation itself.

I've had some great results optimising slow Postgres queries in my own code. One I remember in particular was a multi-millisecond set of queries that I moved into a stored procedure, which then ran in 10 microseconds. And that was just with basic indexing, CTEs, and PL/pgSQL - I'm no expert on Postgres query plans...


Yes, I think most of the time the cost to companies comes not from the Python servers but from RDS and the other DBs they are running. I am not a backend developer, but I would like to know whether it would even be possible to look at a database server the way we are looking at Python programs here.

Like you said, the Postgres docs have the information for optimizing queries. I want to know whether we can look at database servers as just another program running on the CPU, and apply the same kind of logic we are applying to Python programs here.


It’s way outside of the scope of this course.

But if they haven’t even looked yet, they should start in this order:

0) identify slow queries

https://www.cybertec-postgresql.com/en/3-ways-to-detect-slow-queries-in-postgresql/

1) check for n+1 in the python code:

https://adamj.eu/tech/2020/09/01/django-and-the-n-plus-one-queries-problem/

2) run EXPLAIN ANALYSE and look for any table scans, and add indexes as needed

https://www.postgresqltutorial.com/postgresql-tutorial/postgresql-explain/

3) check the database has been vacuumed and analysed, particularly after bulk-loading data

https://confluence.atlassian.com/kb/optimize-and-improve-postgresql-performance-with-vacuum-analyze-and-reindex-885239781.html

4) consider switching to prepared statements for any frequent small queries

5) consider moving data-intensive python logic off your application server and into postgres stored PL/Python procedures to minimise network latency and number of requests - counterintuitively, moving MORE work onto the db server *might* reduce the overall load on it by eliminating networking and session overhead

https://www.crunchydata.com/blog/getting-started-with-postgres-functions-in-plpython


Oh thanks for this list!


In more abstract terms,

Yes: you can and should look at your database in a performance-aware way, and should understand how queries translate to the underlying hardware, and as Casey said, “perform fewer total operations and perform more operations per cycle”, so the needed mindset is the same.

No: the specific CPU-related things we are learning in this course are less relevant to IO-bound databases, and there are many more IO-related things that matter more and that this course won't cover. This means that while the mindset is the same, the detailed how-to knowledge is different.


I think that section alone has the potential to change the industry. Most companies are "data-driven" these days - except for their programming practices. You've just given a data-driven presentation on why the industry's practices are harmful to the software being built. Obviously, a mountain of excuses will follow, but it's a great seed for great change.


Thinking on this more, the one thing that bugs me is how circles fit into that superset struct. That seems like a genuine harm to readability and usability.

If you wrote this somewhere in some large program to process shapes, and I needed, say, a circle, I would see that there's an enum for circle, but then I'd need a "width" and a "height" - and circles don't have those - and I'd also need to know to make them both the same. This really DOES seem like one of those cases where it hurts readability and opens up a pretty clear path for bugs (e.g., thinking "width" and "height" are the diameter of a circle, or not knowing you need to put identical values in both for a circle). Is there some other piece I'm missing? Would you, in practice, abstract over this with constructors for the different structs that have more conventional names?
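For concreteness, one hypothetical shape such a constructor could take, keeping the both-fields-are-the-radius convention in a single place (names are illustrative, not from the video):

    // Illustrative helper: callers ask for a circle by radius and never
    // learn that the superset struct stores it in both Width and Height.
    typedef enum { Shape_Circle, Shape_Rectangle, Shape_Triangle } shape_type;
    typedef struct { shape_type Type; float Width, Height; } shape_union;

    shape_union MakeCircle(float Radius)
    {
        shape_union Result;
        Result.Type = Shape_Circle;
        Result.Width = Radius;
        Result.Height = Radius;
        return Result;
    }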


It depends. In this particular case I would just change the name to ellipse, and support them. But in a real-world case I might do something fancier, like a real discriminated union, or separate arrays, or any of a variety of other techniques.

- Casey


I've seen people do a discriminated union like this, giving a name to each thing it can be:

    struct Shape
    {
        ShapeType type;
        union
        {
            struct { float radius; } circle;
            struct { float width, height; } rectangle;
            struct { float base, height; } triangle;
        };
    };


Ah okay, you're talking about the "Uncle Bob" Clean Code book. I see it now: "G23: Prefer Polymorphism to If/Else or Switch/Case". I tend to disagree with Bob's advice on other topics online, so I never bought his Clean Code book. In any case, I think you've proven pretty convincingly that it's not advice anyone should follow.
