upvote
I see it the exact other way around:

- everyday bugs, just put a breakpoint

- rare cases: add logging

By definition a rare case probably will rarely show up in my dev environment if it shows up at all, so the only way to find them is to add logging and look at the logs next time someone reports that same bug after the logging was added.

Something tells me your debugger is really hard to use, because otherwise why would you voluntarily choose to add and remove logging instead of just activating the debugger?

reply
So much this. Also in our embedded environment debugging is hit and miss. Not always possible for software, memory or even hardware reasons.
reply
Then you need better hardware-based debugging tools like an ICE.
reply
Rare 1% bugs practically require print debugging because they're only going to appear about 6 times if you run the test 600 times. So you just run the test 600 times all at once, look at the logs of the 6 failed tests, and fix the bug. You don't want to run the debugger 600 times in sequence.
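
That "run it N times, keep only the failing logs" loop is easy to script. A minimal Python sketch (the test command, run counts, and worker counts are placeholders, not anyone's actual setup):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_once(cmd, i):
    # Run one instance of the flaky test, capturing everything it prints.
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return i, proc.returncode, proc.stdout + proc.stderr

def hunt(cmd, runs=600, workers=32):
    """Run `cmd` many times in parallel; return (run index, log) for failures."""
    failures = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_once, cmd, i) for i in range(runs)]
        for f in futures:
            i, code, log = f.result()
            if code != 0:
                failures.append((i, log))
    # Only the handful of failing logs are left to read.
    return failures
```

After `hunt(["./flaky_test"])` finishes, you read the six failing logs instead of babysitting 600 debugger sessions.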
reply
Record-and-replay debuggers like rr and UndoDB are designed for exactly this scenario. In fact it's way better than logging; with logging, in practice, you usually don't have the logs you need the first time, so you have to iterate "add logs, rerun 600 times" several times. With rr and UndoDB you just have to reproduce once and then you'll be able to figure it out.
reply
I'm not going to manually execute the bug in a test once if it occurs 1% of the time (or 0.1%, which I often have to deal with too). I'm going to run it 600, 1200, or maybe even 1800 times, and then pick the runs that exhibit the bug to dissect them. I can imagine these could all run under a time-travel debugger that stops and lets me interact when the bug is found, but that sounds way more complicated than just adding log messages and picking through the logs of the failures.
reply
Trace points do exist.
reply
conditional breakpoints, watches, …
reply
... will sometimes make the race condition not occur because things are too slow.

Like the bugs "that disappear in a debug build but happen in the production build all the time".

reply
The tricky race conditions are the ones you often don't see in the debugger, because stopping one thread makes the behavior deterministic. But that aside, for webapps I feel it's way easier to just set a breakpoint and stop to see a var's value instead of adding a print statement for it (just to find out that you also need to see the value of another var). So given you just always start in debugging mode, there's no downside if you have a good IDE.
reply
Using a debugger isn't synonymous with single-stepping.
reply
Even just the debugger overhead can be enough to change the behavior of a subtle race condition.
reply
> The rare ones show up maybe 1% of the time

Lucky you lol

What I've found is that as you chew through surface level issues, at one point all that's left is messy and tricky bugs.

Still have a vivid memory of moving a JS frontend to TS and just overnight losing all the "oh shucks" frontend bugs, being left with race conditions and friends.

Not to say you can't do print debugging with those (tracing is fancy print debugging!), but I've found that a project with a lot of easy-to-debug issues tends to be at a certain level of maturity, and as time goes on you start ripping your hair out way more.

reply
Absolutely. My current role involves literally chasing down all these integration point issues - and they keep changing! Not everything has the luxury of being built on a stable, well tested base.

I'm having the most fun I've had in ages. It's like being Sherlock Holmes and a construction worker all at once.

Print statements, debuggers, memory analyzers, power meters, tracers, tcpdump - everything has a place, and the problem space helps dictate what and when.

reply
The easy-to-debug issues are there because I just wrote some new code, haven't even committed it, and am right now writing unit tests for it. That's extremely common, and print debugging is alright here.
reply
Unit and integration testing give you long-term maintainable code that's easy and quick to prove still works; print debugging leaves you with laborious, untouchable, untestable garbage.
reply
I used to agree with this, but then I realized that you can use trace points (aka non-suspending breakpoints) in a debugger. These cover all the use cases of print statements, with a few extra advantages:

- You can add new traces, or modify/disable existing ones at runtime without having to recompile and rerun your program.

- Once you've fixed the bug, you don't have to clean up all the prints you left around the codebase.

I know there's a good reason for debugging with prints: the debugging experience in many languages sucks. In that case I always use prints. But if I'm lucky enough to use a language with good debugging tooling (e.g. Java/Kotlin + IntelliJ IDEA), there's zero chance I'll ever print for debugging.

reply
TIL about tracepoints! I'm a bit embarrassed to admit that I didn't know these exist, although I'm using debuggers on a regular basis facepalm. Visual Studio seems to have excellent support for message formatting, so you can easily print any variable you're interested in. Unfortunately, QtCreator only seems to support plain messages :-(
reply
I've had far better luck print debugging tricky race conditions than using a debugger.

The only language where I've found a debugger particularly useful for race condition debugging is Go, where it's a lot easier to synthetically trigger race conditions in my experience.

reply
Use trace points and feed the telemetry data into the debugger for analysis.
reply
Somehow I've never used trace points before, thanks!
reply
Even print debugging is easier in a good debugger.

Print debugging in frontend JS/TS is literally just writing the statement "debugger;" and saving the file. JS, unlike supposedly better-designed languages, is designed to support hot reloading, so often just saving the file will launch me into the debugger at the line of code in question.

I used to write C++, and setting up print statements, while easier than using LLDB, is still harder than that.

I still use print debugging, but only when the debugger fails me. It's still easier to write a series of console.log()s than to set up logging breakpoints. If only there were an equivalent to "debugger;" that supported log-and-continue.

reply
> JS (...) is designed to support hot reloading

no it's not lol. hmr is an outrageous hack of the language. however, the fact JS can accommodate such shenanigans is really what you mean.

sorry I don't mean to be a pedantic ass. i just think it's fascinating how languages that are "poorly" designed can end up being so damn useful in the future. i think that says something about design.

reply
ESM has Hot Module Reloading. When you import a symbol it gives you a handle to that symbol rather than a plain reference, so that if the module changes the symbol will too.
reply
My point was that it's not a feature of the language, not that it's not possible.
reply
Well, if you have a race condition, the debugger is likely to change the timing and alter the race, possibly hiding it altogether. Race conditions are where print is often more useful than the debugger.
reply
The same can be said about prints.
reply
Yes, but to a lesser extent.
reply
> the debugger is likely to change the timing

And the print will 100% change the timing.

reply
Yes, but often nowhere near as drastically as the debugger. On Android we have huge logs anyway; a few more printf statements aren't going to hurt.
reply
Log to a memory ring buffer (if you need extreme precision, prefetch everything and write binary fixed size "log entries"), flush asynchronously at some point when you don't care about timing anymore. Really helpful in kernel debugging.
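
A minimal sketch of that approach (in Python for brevity; a kernel or embedded version would be C, but the shape is the same, and the entry layout here is invented): the hot path packs a fixed-size binary record into a preallocated ring buffer with no formatting and no I/O, and decoding happens only at flush time.

```python
import struct
import time

# Fixed-size binary entry: event id (u32), timestamp ns (i64), one payload int (i32).
ENTRY = struct.Struct("<Iqi")

class RingLog:
    def __init__(self, capacity=1024):
        self.buf = bytearray(ENTRY.size * capacity)  # preallocated, never grows
        self.capacity = capacity
        self.head = 0            # next slot to overwrite
        self.wrapped = False     # True once we've gone around at least once

    def log(self, event_id, value):
        # Hot path: one pack into preallocated memory; no formatting, no I/O.
        ENTRY.pack_into(self.buf, self.head * ENTRY.size,
                        event_id, time.monotonic_ns(), value)
        self.head = (self.head + 1) % self.capacity
        if self.head == 0:
            self.wrapped = True

    def flush(self):
        # Cold path: decode oldest-to-newest once timing no longer matters.
        count = self.capacity if self.wrapped else self.head
        start = self.head if self.wrapped else 0
        out = []
        for k in range(count):
            slot = (start + k) % self.capacity
            out.append(ENTRY.unpack_from(self.buf, slot * ENTRY.size))
        return out
```

Because the buffer is a ring, it also naturally keeps only the most recent entries leading up to a crash, which is usually what you want.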
reply
Formatting logs still takes considerable compute, especially when working on an embedded system where your CPU runs at only a few hundred MHz.
reply
You don't need to format the log on-device. You can push a binary representation and format it when you need to display it. Look at 'defmt' for an example of this approach. Logging overhead in the path that emits the log messages can be tens of instructions.
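
defmt itself is Rust, but the idea is easy to sketch: the device emits only a message ID plus raw argument words, and the host applies format strings from a catalog. A toy Python version (the IDs, format strings, and wire layout are invented for illustration; real defmt generates the catalog at build time):

```python
import struct

# Hypothetical catalog mapping message IDs to format strings.
CATALOG = {
    1: "temperature={} raw_adc={}",
    2: "watchdog reset, uptime_ms={}",
}

REC = struct.Struct("<HB")  # per-record header: msg id (u16), arg count (u8)

def emit(msg_id, *args):
    """Device side: pack an id plus raw i32 args; no string formatting at all."""
    return REC.pack(msg_id, len(args)) + b"".join(
        struct.pack("<i", a) for a in args)

def render(blob):
    """Host side: walk the byte stream and apply the catalog's format strings."""
    out, off = [], 0
    while off < len(blob):
        msg_id, argc = REC.unpack_from(blob, off)
        off += REC.size
        args = [struct.unpack_from("<i", blob, off + 4 * k)[0]
                for k in range(argc)]
        off += 4 * argc
        out.append(CATALOG[msg_id].format(*args))
    return out
```

The emit side is a handful of stores; all the expensive string work happens on the host, where cycles are free.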
reply
Hence the mention of binary stuff.... We use ftrace in linux and we limit ourselves a lot on what we "print".
reply
No, wrong. Totally wrong. You're changing the very conditions you're trying to observe, which makes accurate measurement impossible without perturbing the system. This is where you use proper tools like an In-Circuit Emulator (ICE) or its equivalent.
reply
I think you have a specific class of race conditions in mind where tight control of the hardware is desirable or even possible.

But what to do if you have a race condition in a database stored procedure? Or in a GUI rendering code? Even web applications can experience race conditions in spite of being "single-threaded", thanks to fetches and other asynchronous operations. I never heard of somebody using ICE in these cases, nor can I imagine how it could be used - please enlighten me if I'm missing something...

> You're changing the conditions that prevent accurate measurement without modification.

Yes, but if the race condition is coarse enough, as it often is in the above cases, adding print/logging may not change the timings enough to hide the race.

reply
Fully agree.

If I find myself using a debugger it's usually one of two things:

- freshly written low-level assembly code that isn't working

- a basic userspace app crash (in C) where whipping out gdb is faster than adding prints and recompiling.

I've never even needed a debugger for complex kernel drivers; just prints.

reply
I guess I struggle to see how it's easier to print debug, if the debugger is right there I find it way faster.

Perhaps the debugging experience in different languages and IDEs is the elephant in the room, and we are all just talking past each other.

reply
Indeed, depends on deployment and type of application.

If the customer has their own deployment of the app (on their own server or computer), then all you have to go on when they report a problem are logs. Of course, you also have to have a way to obtain those logs. In such cases, it's way better for the developers to never use a debugger either, because they're then forced to ensure during development that the logs contain enough information to pinpoint a problem.

Using a debugger also already means that you can reproduce the problem yourself, which is already half of the solution :)

reply
One from work: another team is willing to support exactly two build modes in their projects: release mode, or full debug info for everything. Loading the full debug info into a debugger takes 30m+ and will fail if the computer goes to sleep midway through.

I just debug release mode instead, where print debug is usually nicer than a debugger without symbols. I could fix the situation other ways, but a non-reversible debugger doesn't justify the effort for me.

reply
Exactly. At work for example I use the dev tools debugger all the time, but lldb for c++ only when running unit tests (because our server harness is too large and debug builds are too large and slow). I’ve never really used an IDE for python.

When using Xcode the debugger is right there and so it is in qt creator. I’ve tried making it work in vim many times and just gave up at some point.

The environment definitely is the main selector.

reply
> the rare, tricky race conditions [...]. The rare ones show up maybe 1% of the time—they demand a debugger,

Interesting. I usually find those harder to debug with a debugger. Debuggers change the timing when stepping through, making the bug disappear. Do you have a cool trick for that? (Or a mundane trick, I'm not picky.)

reply
It is also much, much easier to fix all kinds of other bugs by stepping through code with the debugger.

I'm in the camp where the 1% on the easy side of the curve can be efficiently fixed with print statements.

reply
The real question is: why do we (as an industry) not use testing frameworks more to see if we can replicate those rare, obscure bugs? If you can code up the state, you can now reproduce it 100% of the time. The real answer, it seems to me, is that the industry isn't writing any (or enough) unit tests.

If your code can be unit tested, you can twist and turn it in many ways, if it's not an integration issue.
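
One way to "code the state" for a race: make the interleaving injectable so a unit test can force the exact schedule that triggers the bug. A toy Python sketch of a lost-update race (the pause-hook design is just one possible approach, not a standard pattern):

```python
import threading

class Counter:
    """A counter with a read-modify-write race. `pause` is a test-only hook
    (an assumption of this sketch) that freezes a thread between its read
    and its write."""
    def __init__(self, pause=None):
        self.value = 0
        self.pause = pause

    def increment(self):
        current = self.value          # read
        if self.pause:
            self.pause()              # test forces the other thread in here
        self.value = current + 1      # write: may clobber a concurrent update

def test_lost_update():
    # Deterministically interleave two increments so the rare race
    # reproduces 100% of the time instead of 1%.
    a_read = threading.Event()
    b_done = threading.Event()

    def pause_a():
        c.pause = None    # only freeze the first increment (thread A)
        a_read.set()      # signal: A has read but not yet written
        b_done.wait()     # stay frozen until B's increment completes

    c = Counter(pause=pause_a)
    a = threading.Thread(target=c.increment)
    a.start()
    a_read.wait()         # A read value=0 and is now frozen
    c.increment()         # B runs start to finish: value == 1
    b_done.set()          # unfreeze A
    a.join()              # A writes its stale 0 + 1, clobbering B's update
    assert c.value == 1   # lost update reproduced (a correct Counter gives 2)
    return c.value
```

Once the schedule is pinned down like this, the fix (a lock, an atomic) can be verified by the same test flipping to the correct answer.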

reply
> Leave us be. We know what we’re doing.

No shade, this was my perspective until recently as well, but I disagree now.

The tipping point for me was the realisation that if I'm printing code out for debugging, I must be executing that code, and if I'm executing that code anyway, it's faster for me to click a debug point in an IDE than it is to type out a print statement.

Not only that, but the thing that I forgot to include in my log line doesn't require adding it in and re-spinning, I can just look it up when the debug point is hit.

I don't know why it took me so long to change the habit but one day it miraculously happened overnight.

reply
> it's faster for me to click a debug point in an IDE than it is to type out a print statement

Interesting. I always viewed the interface to a debugger as its greatest flaw—who wants to grapple with an interface reimplementing the internals of a language half as well when you can simply type, save, commit, and reproduce?

reply
Depends on your language, runtime, dev tooling.

I'm using IntelliJ for a Java project that takes a very long time to rebuild, re-spin and re-test. For E2E tests a 10-minute turn-around time would be blazingly fast.

But because of the tooling, once I've re-spun I can connect a debugger to the JVM and click a line in IntelliJ to set a breakpoint. Combined, that takes 5 seconds.

If I need to make small changes at that point, I can usually try them out directly in the debugger to see how they execute, all while paused at that spot.

reply
> who wants to grapple with an interface reimplementing the internals of a language half as well when you can simply type, save, commit, and reproduce?

i do, because it's much faster than typing, saving, and rebuilding, etc.

reply
Often you can also just use conditional breakpoints, which surprisingly few people know about. (To be clear, it's still a breakpoint; your application just auto-continues if the condition is false.) It's usually available via right-click on the spot where you'd normally set the breakpoint.
reply
This post exactly.
reply
I don't see any evidence that the 1% of bugs can be reduced so easily. A debugger is unsuitable just as often as print debugging is. There is no inherent edge it gives for the sort of reasoning demanded. It's just a flathead rather than a Phillips. The only thing that distinguishes this sort of bug from the rest is pain.
reply
When the print statements cause a change in asynchronous data hazards that leads to the issue disappearing, then what's the plan since you appear to "know it all" already? Perhaps you don't know as much as you profess, professor.
reply
> Leave us be. We know what we’re doing.

No. You’re wrong.

I’ll give you an example of a plain vanilla-ass bug that I dealt with today.

A teammate was trying to use PortAudio with ALSA on one of our cloud Linux machines for CI tests. PortAudio was failing to initialize with an error saying it couldn't find the host API.

Why did it fail? Where did it look? What actual operation failed? Who the fuck knows! With a debugger this would take approximately 30 seconds to understand exactly. Without a debugger you need to spend a whole bunch of time figuring out how a random third-party library works just to figure out where the fuck to even put a printf.

Printf debugging is great if it’s within systems you already know inside and out. If you’re dealing with code that isn’t yours, then a debugger is more than an order of magnitude faster and more efficient.

It’s super weird how proud people are to not use tools that would save them hundreds of hours per year. Really really weird.

reply
The hardest bug I had to track down took over a month, and a debugger wouldn't have helped one bit.

On the development system, the program would only crash, under heavy load, on the order of hours (over 12 hours, sometimes over 24). On the production system, on the order of minutes (usually less than an hour). But never immediately. The program itself was a single process, no threads whatsoever. Core dumps were useless, as they were inconsistent (the crash was never in the same place twice).

I do think that valgrind (had I known about it at the time) would have found it ... maybe. It might have caught the memory corruption, but not its actual root cause. The root cause was a signal handler (so my "non-threaded code" was technically "threaded code") calling non-async-signal-safe functions such as malloc() (not directly, but in code called by the signal handler). A tough lesson I haven't forgotten.

reply
Ok? A debugger also wouldn’t help the hardest bug I ever fixed!

It is not the only tool in the bag. But literally the first question anyone should ask when dealing with any bug is “would attaching a debugger be helpful?”. Literally everyone who doesn’t use a debugger is less effective at their jobs than if they frequently used a debugger.

reply
> It’s super weird how proud people are to not use tools that would save them hundreds of hours per year. Really really weird.
reply
:eyeroll:

I use logs and printf. But printf is a tool of last resort, not first. Debugging consideration #1 is “attach debugger”.

I think the root issue is that most people on HN are Linux bash jockeys and Linux doesn’t have a good debugger. GDB/LLDB CLI are poop. Hopefully RadDebugger is good someday. RadDbg and Superluminal would go a long long way to improving the poor Linux dev environment.

reply