As a Java engineer who has been in web development for several years, I've heard over and over that X is good because of the SOLID principles or that Y is bad because it breaks them, and I've had to memorize the "right" way to do everything before interviews. The more I dig into the real reasons I'm doing something in a particular way, the harder I find it to justify.
One example is creating an interface for every goddamn class I make because of "loose coupling", when in reality none of these classes will ever have an alternative implementation.
Also, the more I get into languages like Rust, the more these doubts grow, leading me to believe that most of it is dogma that has drifted far beyond its initial motivations and goals and is now just a mindless OOP circlejerk.
There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.
What are your opinions on this?


So we should not have #defines in the way, right?
Like INT32 instead of "int". I mean, if you don't know the size, you're probably not doing network protocols or reading binary formats anyway.
uint64_t is good IMO, maybe a bit long (why the _t?), but it's not one of the atrocities I'm talking about, where every project had its own defines.
"int" can be a different width on different platforms. If all the compilers you have to compile with have standard definitions for specific widths, then great, use 'em. That hasn't always been the case, and when it isn't, you have to roll your own. I'm sure some projects did it where it was unneeded, but when you have to do it, you have to do it.
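(For concreteness, a sketch of what "rolling your own" typically looked like before <stdint.h> arrived in C99: probe <limits.h> at preprocessing time and pick whichever basic type is 32 bits wide. The alias names here are made up for illustration.)

    #include <limits.h>

    /* Pick a 32-bit type without knowing the target ahead of time. */
    #if INT_MAX == 2147483647
    typedef int           my_int32;
    typedef unsigned int  my_uint32;
    #elif LONG_MAX == 2147483647
    typedef long          my_int32;
    typedef unsigned long my_uint32;
    #else
    #error "no 32-bit integer type found for this compiler"
    #endif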
So show me two compatible systems where int has different sizes.
This is folklore IMO, or the systems are incompatible anyway.
RPython, the toolchain used to build JIT compilers like PyPy, supports Windows and non-Windows interpretations of the standard Python int. This leads to an entire module's worth of specialized arithmetic. In RPython, the usual approach to handling the size of ints is to immediately stop worrying about it and let the compiler tell you if you got it wrong; an int will have at least seven-ish bits, but anything more is platform-specific. This is one of the few systems I've used where I have to cast from an int to an int, because the compiler can't prove that the two ints are the same size and might need a runtime cast, yet it can't tell me whether that runtime cast is actually needed.

Of course, I don't expect you to accept this example, given what a whiner you've been down-thread, but at least you can't claim that nobody showed you anything.
Bravo, you found an example!
You’re right, we should start using #define INT32 again…
Incompatible? It's for cross-platform code. Wtf are you even talking about?
Okay, then give me an example where this matters. If an int isn't the same size, like on a Nintendo DS versus Windows (wildly incompatible), I struggle to find a use case where it would help you out.
You can write code that depends on a specific data-type width, and you can compile that code for different platforms. I have no idea what you mean by "wildly incompatible", but I guarantee you there is code that runs on both the Nintendo DS and Windows.
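(As a concrete illustration of code that depends on a specific width: a minimal 32-bit FNV-1a hash. The algorithm is defined in terms of arithmetic modulo 2^32, so it only produces the standard values if the accumulator really is 32 bits wide; with a plain unsigned int on a platform where int is 16 or 64 bits, the results would differ.)

    #include <stddef.h>
    #include <stdint.h>

    /* 32-bit FNV-1a: relies on uint32_t wrapping modulo 2^32. */
    uint32_t fnv1a_32(const unsigned char *data, size_t len) {
        uint32_t hash = 2166136261u;      /* FNV offset basis */
        for (size_t i = 0; i < len; i++) {
            hash ^= data[i];
            hash *= 16777619u;            /* FNV prime */
        }
        return hash;
    }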
Well, cite me one then. I mean, there's super-niche stuff that could theoretically need that, but 99.99% of software didn't, and even less does now. IMO.
Have you never heard of the concept of serialization? It's weird to bring up the Nintendo DS and not be familiar with it, as it's a very important topic in game development. Outside of game development, it's used a lot in networking code. Even JavaScript has ArrayBuffer.
I’ve personally built small homebrew projects that run on both Nintendo DS and Windows/Linux. Is that really so hard to imagine? As long as you design proper abstractions, it’s pretty straightforward.
Generally speaking, the best way to write optimal code is to understand your data first. You can’t do that if you don’t even know what format your data is in!
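(A sketch of the serialization point, and of what "knowing your data format" buys you: write the value as exactly four little-endian bytes instead of memcpy'ing a raw int, and the on-disk or on-wire format is identical regardless of the host's int width or endianness. Function names are illustrative.)

    #include <stdint.h>

    /* Encode/decode an unsigned 32-bit value as 4 little-endian bytes. */
    void put_u32_le(uint8_t *out, uint32_t v) {
        out[0] = (uint8_t)(v & 0xff);
        out[1] = (uint8_t)((v >> 8) & 0xff);
        out[2] = (uint8_t)((v >> 16) & 0xff);
        out[3] = (uint8_t)((v >> 24) & 0xff);
    }

    uint32_t get_u32_le(const uint8_t *in) {
        return (uint32_t)in[0]
             | ((uint32_t)in[1] << 8)
             | ((uint32_t)in[2] << 16)
             | ((uint32_t)in[3] << 24);
    }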
What on earth did you run on a DS and Windows? I'm curious! BTW, we used hard-coded in-memory structures, not serialized data; you'd have a hard time doing serialization perfectly well on the DS, IMO.
Still, it's only a small homebrew project, so IMO my point still stands.
As for understanding your data, you need to know the size of the int on your system to set up the infamous INT32 to begin with!
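(For what it's worth, projects that defined an INT32 usually didn't know the int size up front either; they asserted the assumption and let the compiler reject a wrong build. A sketch using the old negative-array-size trick; the names are illustrative.)

    #include <limits.h>

    typedef int INT32;  /* the project-local alias under discussion */

    /* Fails to compile on any platform where int is not 32 bits wide:
     * the array size evaluates to -1, which is a constraint violation. */
    typedef char int32_width_check[(INT_MAX == 2147483647) ? 1 : -1];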
I'm done spending time on this. If you're so insistent on being confidently incorrect, then have at it.
Lol
The standard type aliases like uint64_t weren't in the C standard library until C99, and not in C++ until C++11, so there are plenty of older code bases that would have had to define their own. The use of #define to make type aliases never made sense to me. The earliest versions of C didn't have typedef, I guess, but that's like, the 1970s. Anyway, you wouldn't do it that way in modern C/C++.

I've seen several codebases that have a typedef or using keyword to map uint64_t to uint64, along with the others, but _t seems to be the convention for the built-in std type names.
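(A sketch of the distinction being made: a #define alias is blind text substitution, while typedef introduces a real type name; modern code just uses the <stdint.h> names directly. The alias names here are illustrative.)

    #include <stdint.h>

    #define INT32_MACRO int          /* old-style macro alias              */
    typedef int32_t     int32_alias; /* proper alias for the standard type */

    /* "unsigned INT32_MACRO x;" happens to compile, because the macro expands
     * to the keyword "int"; "unsigned int32_alias y;" is rejected, because a
     * typedef name is a complete type and can't be combined with "unsigned". */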
IIRC, _t is used to denote reserved standard type names.