Hacker News | AdieuToLogic's comments

> Unfortunately there is no way to block websites at the network level (that I know of) as browsers and mobile phones have started using hardcoded DNS resolvers, so the utility of this is limited.

Any network traffic which goes through a gateway under your control can be controlled. DNSSEC[0] and encrypted DNS (DoH/DoT) can make this more difficult, true, but not impossible, as content ultimately originates from an IPv4/IPv6 address and can be dropped by upstream network devices.

0 - https://en.wikipedia.org/wiki/Domain_Name_System_Security_Ex...
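As a sketch of what that gateway-level control could look like, here is a minimal nftables ruleset. The interface name (`lan0`) and the DoH resolver addresses are placeholders, and the address list is illustrative, not exhaustive:

```nft
# Force plain DNS from the LAN through the gateway's own resolver,
# and drop a few well-known DoH endpoints so clients fall back to
# DNS that the gateway can filter.
table inet dnsctl {
  chain prerouting {
    type nat hook prerouting priority dstnat;
    iifname "lan0" udp dport 53 redirect to :53
    iifname "lan0" tcp dport 53 redirect to :53
  }
  chain forward {
    type filter hook forward priority filter;
    # Placeholder list of public DoH resolvers to block on 443.
    ip daddr { 1.1.1.1, 8.8.8.8 } tcp dport 443 drop
  }
}
```

Hardcoded DoH endpoints baked into apps would still need their addresses added to the drop set, which is the cat-and-mouse part of the problem.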


Here is a gradated set of exercises to determine one's phone addiction, if any, in increasing levels of potential difficulty.

  1 - on an off day, with no reason to require phone use,
    put your phone in a dresser drawer for the day and
    do not use or look at it.

  2 - on an off day, with no reason to require phone use,
    put your phone in a dresser drawer for the day and
    leave your residence for at least one hour.

  3 - leave your phone at home when either meeting friends,
    getting lunch, or going to the grocery store.

  4 - leave your phone at home when going into the office
    for one day.

  5 - leave your phone in a dresser drawer for an entire
    weekend.

  6 - leave your phone at home when traveling for more
    than a day (vacation, visiting family, etc.).

I guess it's a good test for something, but I wouldn't call that something "phone addiction". I think wanting to be reachable by friends and family is fine and "addiction" starts when you start compulsively using your phone, like if you're scrolling through [insert social media here].

And phones are much more than content consumption machines - I like having a little pocket camera with me in case I see a new cat in the neighbourhood or something, and looking up bus schedules, renting city bikes, calling a cab, etc. are things I all but need to be able to do when I'm out.

My trick to almost never looking at my phone has been, somewhat ironically, having a smartwatch, as well as carefully curating the notifications I get on my phone. If I know I can't miss an important notification, I'll never even look at my phone, so there's no chance I even see one of those time wasting apps. And when a notification buzzes on my wrist, I can see in a fraction of a second if it's something really important or if it can wait.


These are very good. I take phone-free walks around the neighborhood, to the store, downtown for a festival. It feels weird at first, then it's nice.

I took an internet-free vacation last spring, and it was lovely.

While planning the trip, I made sure my old TomTom's built-in maps seemed accurate to what I was seeing online; there wasn't a lot of road-building activity there in the last decade or two. Then I turned off my phone and locked it in the glovebox, there in case of emergency.

Then I took a deep breath, started the car, and headed north.

It was awesome just knowing there was no way a notification could ding, nobody could call me, no news headline could pop up and harsh my mellow. Even if those things didn't actually happen constantly, simply existing in a state where they could was stressful, apparently, and turning the damn thing off was remarkably cathartic.


Reading your story brings joy to my heart, not for any reason other than I can see in my mind's eye what you describe. And it rocks!

Freedom is a gift, not from without, but found from within.

We set ourselves free by our choices. And we shackle ourselves by the same.


> in increasing levels of potential difficulty

Level 0 or 100 depending on the person: take your phone with you and just don't a) look at it every 5 minutes, b) reply to incoming messages instantly or c) check in to see what some pointless celebrity posted in the last 3 minutes.


The problem is I know that I am completely addicted, but I cannot stop. I feel like the alcoholic drinking a bottle of vodka a day. I have tried to give up many times but I just can't crack it. Every time I have a good day, the next day I slide right back into addiction. I probably average around 5-10 hours of pointless screen time a day (scrolling random YouTube clips, researching items I will never buy, fantasizing about jobs I can never get).

I have tried all kinds of blocking software and strategies. Blocking software, however elaborate, never seems to make a difference. You find one way or another to get around the block, and after a while turning off the block just becomes part of your muscle memory. The most extreme thing I tried was cutting off the internet to my house and going back to a dumbphone for 6 months. For sure, I probably had less screen time. But I also spent many hours sitting at the station using the public wifi or watching hours and hours of pointless television.

This is a really tough nut to crack. I think there is probably no technological solution to it.


Addiction is not the problem. It is a (poor) solution to a problem. Figure out what your underlying problem is and address it first. Without doing that, you are only taking away one solution without offering an alternative.

For me, I noticed I have no compulsion to surf after hanging out with friends where I have their attention and curiosity and they have mine. It is like an oxytocin surge that depletes over time and needs recharging. Scrolling is like junk food in that it feels like a recharge but empties as soon as I stop.

I now call up a friend or arrange a hangout if I feel like I’m running low and it’s amazing how many friends are delighted to hear from me but then never reach out.


I think this is a helpful reframing, and I have spent time in my life trying to eliminate any possible issues: improving nutrition, exercise, socialization, etc. But my ability to stay focused and work on tasks seems essentially random.

> watching hours and hours of pointless television

This is the thing; the brain is not actually comfortable just sitting idle with the reins slack. There's got to be some stimulus. I don't think there's any real solution other than finding a displacement activity. I know somebody who weaned themselves off smoking by developing a Gameboy Tetris addiction instead.

Other than going out and trying to be social, there's a whole range of "something to do with your hands" activities. If you take up knitting then at least at the end of it you have a scarf. Myself, I'm trying to train myself to open one of the language learning apps every time I catch myself scrolling.


One solution to how easy it is to get around self-inflicted blocks could be to find someone who agrees to manage your phone using the parental-control features. Personally I haven't found someone I feel I could trust with such power over me. Maybe a solution would be to pay somebody.

Did you check yourself for adhd?

I was diagnosed with ADHD a few years ago as an adult. I take medication for it and try my best to apply strategies but it is hard going. I wrote down some simple todos at 9am this morning but it's the end of the day now and I've done maybe 30 minutes of focused work and the rest browsing the internet.

The confusing thing is sometimes I have days when I do manage to do work, but I can never see what I do differently on those days to other days.


Did you discuss this with psychiatrist/therapist?

At least for me this is the pattern I had before I had a good enough dose of meds.


I have spoken to a few therapists. I usually felt pretty good after speaking to them, maybe for a week or so but slip back into my old habits. Unfortunately, where I live therapy is not covered by health insurance so it's hard to afford.

Have you tried mounting the phone to a wall or something when at home?

I figure the accessibility of phones is what makes the mindless scrolling habit so dangerous.

I mean I keep my beer in the garage to not drink as much.


I also recommend getting an Apple Watch with cellular – that way you can still be reached for emergencies, while not having access to any social media or other distractions. Since I got an Apple Watch I find myself leaving the phone at home more often.

Wouldn’t a dumbphone work better for a fraction of the price?

Your dumbphone can't have your actual phone number as that SIM is in your iPhone, so it's no good for emergency notifications. The reality is that the vast majority of people can't actually use a dumbphone as their daily driver. Society has pushed us past that point.

This simply isn't true. Where I live every major operator offers multisim i.e. two (e)sims with the same number. It's primarily used for smartwatches, but they support phones as well.

A dumbphone does not have iMessage, dictation, voice memos, a timer, and other small things that make life more convenient. That's why I prefer the Apple Watch.

You could get a few of those with an HN-style Rube Goldberg setup where you set up a VoIP number you can call that will do voice-to-SMS, call-back timers, etc.

> I also recommend getting an Apple Watch with cellular – that way you can still be reached for emergencies ...

For people who realistically could require emergency contact (parents of minor children, family members with health risks, etc.) this is a wise recommendation.

However, for those not having these very genuine concerns, an Apple Watch with cellular connectivity (or equivalent device) could engender a placebo effect and mask withdrawal.


Agreed - I disabled all non-essential notifications (I don't need Slack pinging my wrist) and have found my watch actually helps me ditch the phone more easily.

I'm still "reachable", but the watch UX is annoying enough that I won't find myself scrolling X etc on it.


Looks like I just inadvertently skipped to level 4 every workday, due to working inside of a restricted area with lots of proprietary industrial stuff.

7. Keep your Nokia 3210 with you at all times

Very nice, will use it on my child, but this doesn't cover my case.

I have it as a wallet (those flip cases) so it is always with me. But it can stay in my backpack for days without being used, except maybe for calls (to talk with my parents after I haven't called for weeks :D) and to pay for public transit (it's a huge mess to recharge NFC cards). I don't use social networks or chat software (SMS excluded) at all, never even registered for FB, and can't even remember when I last installed an app.

I consider this a very sane use of a phone. It is not addiction, rather satisfying an addicted society that is pressuring me to use it.


> Very nice, will use it on my child, but this doesn't cover my case.

Thanks for sharing your perspective. I need to point out what I originally stated was:

  ... exercises to determine one's phone addiction, if any ...
Note the "if any" qualifier.

You express having no phone addiction and I have no reason to think otherwise. More importantly, I am not going to adjudicate as to yourself or anyone else.


Yes, sure, as I said, I will practice it on my 15-year-old... he probably can't do any of the stated exercises :D

Paying with a watch is a nice alternative too.

Beyond not having the phone with you, I think the real measure is the number of times it's picked up and/or unlocked.


How does this work when you need a phone for 2fa?

> How does this work when you need a phone for 2fa?

See the stipulation of:

  on an off day, with no reason to require phone use
If you "need a phone for 2fa" then that qualifies as a "reason to require phone use."

> How does this work when you need a phone for 2fa?

Just out of curiosity, suppose you are not on-call for work and it is an observed holiday. Do you foresee the need for two factor authentication for non-work activities?

In other words, is 2fa a requirement for daily life?


One example would be Github for personal projects. There are several other use cases where the phone is a factor for logging into services.

Git pull a day before, git push a day later. Have we forgotten how to do anything without a persistent internet connection 24/7? Or why we'd use a distributed version control system in the first place?

It was an example of a use case familiar to many here. Some people use Github for more than just git.

This is a good one. My phone is my memory. If I ever need to be without a phone, I 100% need to carry a notebook and pen. And likely a camera.

The hardest challenge is not using your phone when sitting on the toilet

Old good air freshener label reading

Dr. Bronner's soap is good also.

DILUTE! DILUTE! OK!


If you sit on the toilet long enough to have time to look at your phone, you should probably address that.

As an always-sitter, it’s always long enough.

If you never do, you’d hopefully be aware that it’s exceptional.

There is a word for those who believe they cannot live without something, go to whatever means necessary in order to obtain it, even knowing it is harmful, only to find what was once thought an escape is now a prison.

We prefer the term 'addict', thank you very much.

> The one thing I’ve found that works for me on my phone is the OneSec app.

Sometimes the simplest solution is the Luddite one: put the phone down and step away from it.

If this appears to be an insurmountable ask, or otherwise infeasible, I humbly suggest there is a greater concern to be addressed, one unlikely to be remedied by yet another app on the very phone that cannot be put at a distance.


I agree, this is the pathway. For me, this is the tool I’ve found that works to nudge me down that pathway by adding extra friction to the routes to cheap, crap dopamine. Often an interruption from this app is accompanied by my brain going “huh, so what do you really want to use this time for?”.

It’s too true. If your problem is your phone, the solution won’t be found on your phone.

Speaking of make...

A while back I attended an open-source conference (which was a lot of fun). After the presentations, people would "set up shop" at tables and jam on whatever was their fancy.

One evening there was a person using make as a SAT solver[0]. That blew my mind to be honest. I had used make for years as a build tool and never thought of it in that problem space.

This memory isn't relevant to this project. I was just reminded of the experience is all.

0 - https://en.wikipedia.org/wiki/SAT_solver


> Mostly because you don’t know if it’s correct unless you know SQL. It’s entirely too easy to get results that look correct but aren’t ...

This is the fundamental problem when attempting to use "GenAI" to make program code, SQL or otherwise. All one would have to do is substitute SQL with language/library of choice above and it would be just as applicable.


Fully agree, I just harp on SQL because a. It’s my niche b. It always seems to be a “you can know this, but it doesn’t really matter” thing even for people who regularly interact with RDBMS, and it drives me bonkers.

> So the question is, can you hide the formal stuff under the hood, just like you can hide a calculator tool for arithmetic? Use informal English on the surface, while some of it is interpreted as a formal expression, put to work, and then reflected back in English?

The problem with trying to make "English -> formal language -> (anything else)" work is that informality is, by definition, not a formal specification and therefore subject to ambiguity. The inverse is not nearly as difficult to support.

Much like how a property in an API initially defined as being optional cannot be made mandatory without potentially breaking clients, whereas making a mandatory property optional can be backward compatible. IOW, the cardinality of "0 .. 1" is a strict superset of "1".
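That compatibility asymmetry is easy to sketch in code. The names and contract shapes below are hypothetical, just to make the "0 .. 1 is a superset of 1" point concrete:

```python
# Hypothetical v1 contract: "nickname" is optional (cardinality 0..1).
def parse_v1(payload: dict) -> dict:
    return {"name": payload["name"], "nickname": payload.get("nickname")}

# Tightening it in v2 so "nickname" is mandatory (cardinality 1) rejects
# payloads that were valid yesterday, which breaks existing clients.
def parse_v2(payload: dict) -> dict:
    return {"name": payload["name"], "nickname": payload["nickname"]}

old_payload = {"name": "ada"}          # valid under v1

print(parse_v1(old_payload))           # accepted: nickname defaults to None
try:
    parse_v2(old_payload)
except KeyError:
    print("v2 rejected a previously valid payload")
```

Going the other way (mandatory in v1, optional in v2) accepts every old payload, which is why loosening is backward compatible while tightening is not.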


> The problem with trying to make "English -> formal language -> (anything else)" work is that informality is, by definition, not a formal specification and therefore subject to ambiguity. The inverse is not nearly as difficult to support.

Both directions are difficult and important. How do you determine when going from formal to informal that you got the right informal statement? If you can judge that, then you can also judge if a formal statement properly represents an informal one, or if there is a problem somewhere. If you detect a discrepancy, tell the user that their English is ambiguous and that they should be more specific.


LLMs are pretty good at writing small pieces of code, so I suppose they can very well be used to compose some formal logic statements.

It's an interesting presentation, no doubt. The analogies eventually fail as analogies usually do.

A recurring theme presented, however, is that LLMs are somehow not controlled by the corporations which expose them as a service. The presenter made certain to identify three interested actors (governments, corporations, "regular people") and how LLM offerings are not controlled by governments. This is a bit disingenuous.

Also, the OS analogy doesn't make sense to me. Perhaps this is because I do not subscribe to LLMs having reasoning capabilities, nor to their being able to reliably provide the services an OS-like system can be shown to provide.

A minor critique regarding the analogy equating LLMs to mainframes:

  Mainframes in the 1960's never "ran in the cloud" as it did
  not exist.  They still do not "run in the cloud" unless one
  includes simulators.

  Terminals in the 1960's - 1980's did not use networks.  They
  used dedicated serial cables or dial-up modems to connect
  either directly or through stat-mux concentrators.

  "Compute" was not "batched over users."  Mainframes either
  had jobs submitted and ran via operators (indirect execution)
  or supported multi-user time slicing (such as found in Unix).

Hang in there! Your comment makes some really good points about the limits of analogies and the real control corporations have over LLMs.

Plus, your historical corrections were spot on. Sometimes, good criticisms just get lost in the noise online. Don't let it get to you!


> The presenter made certain to identify three interested actors (governments, corporations, "regular people") and how LLM offerings are not controlled by governments. This is a bit disingenuous.

I don't think that's what he said, he was identifying the first customers and uses.


>> A recurring theme presented, however, is that LLM's are somehow not controlled by the corporations which expose them as a service. The presenter made certain to identify three interested actors (governments, corporations, "regular people") and how LLM offerings are not controlled by governments. This is a bit disingenuous.

> I don't think that's what he said, he was identifying the first customers and uses.

The portion of the presentation I am referencing starts at or near 12:50[0]. Here is what was said:

  I wrote about this one particular property that strikes me
  as very different this time around.  It's that LLM's like
  flip they flip the direction of technology diffusion that
  is usually present in technology.

  So for example with electricity, cryptography, computing,
  flight, internet, GPS, lots of new transformative that have
  not been around.

  Typically it is the government and corporations that are
  the first users because it's new expensive etc. and it only
  later diffuses to consumer.  But I feel like LLM's are kind
  of like flipped around.

  So maybe with early computers it was all about ballistics
  and military use, but with LLM's it's all about how do you
  boil an egg or something like that.  This is certainly like
  a lot of my use.  And so it's really fascinating to me that
  we have a new magical computer it's like helping me boil an
  egg.

  It's not helping the government do something really crazy
  like some military ballistics or some special technology.
Note the identification of historic government interest in computing along with a flippant "regular person" scenario in the context of "technology diffusion."

You are right in that the presenter identified "first customers", but this is mentioned in passing when viewed in context. Perhaps I should not have characterized this as "a recurring theme." Instead, a better categorization might be:

  The presenter minimized the control corporations have by
  keeping focus on governmental topics and trivial customer
  use-cases.
0 - https://youtu.be/LCEmiRjPEtQ?t=770

Yeah that's explicitly about first customers and first uses, not about who controls it.

I don't see how it minimizes the control corporations have to note this. Especially since he's quite clear about how everything is currently centralized / time share model, and obviously hopeful we can enter an era that's more analogous to the PC era, even explicitly telling the audience maybe some of them will work on making that happen.


I took away from this a different message than what I think you did. I respect your perspective, and that we respectfully disagree.

Just for fun, I wondered how small a canonical hello world program could be on macOS running on an ARM processor. Below is based on what I found here[0], with minor command-line switch alterations to account for a newer OS version.

ARM64 assembly program (hw.s):

  //
  // Assembler program to print "Hello World!"
  // to stdout.
  //
  // X0-X2 - parameters to macOS system calls
  // X16 - macOS system call number
  //
  .global _start             // Provide program starting address to linker
  .align 2

  // Set up the parameters to print hello world
  // and then call macOS to do it.

  _start: mov X0, #1            // 1 = stdout
          adr X1, helloworld    // string to print
          mov X2, #13           // length of our string
          mov X16, #4           // macOS write system call
          svc 0                 // Call macOS to output the string

  // Set up the parameters to exit the program
  // and then call macOS to do it.

          mov     X0, #0        // Use 0 return code
          mov     X16, #1       // System call 1 terminates this program
          svc     0             // Call macOS to terminate the program

  helloworld:      .ascii  "Hello World!\n"

Assembling and linking commands:

  as -o hw.o hw.s &&
  ld -macos_version_min 14.0.0 -o hw hw.o -lSystem -syslibroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.2.sdk -e _start -arch arm64

Resulting file sizes:

  -rwxr-xr-x  1 <uid>  <gid>    16K Jun 18 21:23 hw
  -rw-r--r--  1 <uid>  <gid>   440B Jun 18 21:23 hw.o
  -rw-r--r--  1 <uid>  <gid>   862B Jun 18 21:21 hw.s
0 - https://smist08.wordpress.com/2021/01/08/apple-m1-assembly-l...

> A point that may be pedantic: I don't add (and then remove) "print" statements. I add logging code, that stays forever. For a major interface, I'll usually start with INFO level debugging, to document function entry/exit, with param values.

This is an anti-pattern which results in voluminous log "noise" when the system operates as expected; I have personally seen it produce gigabytes per day. It can also litter the solution with transient concerns once thought important but no longer relevant.

If detailed method invocation history is a requirement, consider using the Writer Monad[0] and only emitting log entries when either an error is detected or in an "unconditionally emit trace logs" environment (such as local unit/integration tests).

0 - https://williamyaoh.com/posts/2020-07-26-deriving-writer-mon...
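The linked post is Haskell, but the deferred-emission shape is easy to sketch in Python. The names here are made up; the point is only that the trace is accumulated alongside the computation and emitted solely on failure:

```python
from dataclasses import dataclass, field


@dataclass
class Traced:
    """Writer-style pair: a value plus the trace accumulated producing it."""
    value: object
    trace: list = field(default_factory=list)

    def bind(self, fn, note: str) -> "Traced":
        # Record the step before running it, so a failing step is captured.
        self.trace.append(f"{note}: in={self.value!r}")
        self.value = fn(self.value)
        return self


def handle(request):
    t = Traced(request)
    try:
        t.bind(str.strip, "normalize").bind(str.upper, "encode")
        return t.value            # success path: the trace is simply dropped
    except Exception:
        for line in t.trace:      # failure path: emit the full history at once
            print(line)
        raise
```

`handle("  hi ")` returns `"HI"` and logs nothing; `handle(42)` raises and first prints the trace up to the failing step, so the invocation history is available exactly when it matters.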


It's absolutely not an anti-pattern if you have appropriate tools to handle different levels of logging, and especially not if you can filter debug output by area. You touch on this, but it's a bit strange to me that the default case is assumed to be "all logs all the time".

I usually roll my own wrapper around an existing logging package, but https://www.npmjs.com/package/debug is a good example of what life can be like if you're using JS. Want to debug your rate limiter? Write `DEBUG=app:middleware:rate-limiter npm start` and off you go.


> It's absolutely not an anti-pattern if you have appropriate tools to handle different levels of logging, and especially not if you can filter debug output by area.

It is an anti-pattern due to what was originally espoused:

  I add logging code, that stays forever. For a major 
  interface, I'll usually start with INFO level debugging, to 
  document function entry/exit, with param values.
There is no value in logging "function entry/exit, with param values" when all collaborations succeed and the system operates as intended. Note that service request/response logging is its own concern and is out of scope for this discussion.

Also, you did not address the non-trivial cost implications of voluminous log output.

> You touch on this, but it's a bit strange to me that the default case is assumed to be "all logs all the time".

Regarding the above, production-ready logging libraries such as Logback[0], log4net[1], log4cpp[2], et al, allow for run-time configuration to determine what "areas" will have their entries emitted. So "all logs all the time" is a non sequitur in this context.
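For instance, a minimal Logback sketch along these lines (logger names are placeholders) keeps a noisy area quiet by default while allowing one "area" to be turned up at run time; with `scan="true"`, edits to the file take effect without a redeploy:

```xml
<!-- logback.xml sketch; logger names below are placeholders -->
<configuration scan="true" scanPeriod="30 seconds">
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder><pattern>%d %-5level %logger - %msg%n</pattern></encoder>
  </appender>

  <!-- quiet by default ... -->
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>

  <!-- ... but a single area can be turned up when investigating -->
  <logger name="com.example.ratelimiter" level="DEBUG"/>
</configuration>
```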

What is relevant is the technique identified of emitting execution context when it matters and not when it doesn't. As to your `npm` example, I believe this falls under the scenario I explicitly identified thusly:

  ... an "unconditionally emit trace logs" environment
  (such as local unit/integration tests).
0 - https://logback.qos.ch/

1 - https://logging.apache.org/log4net/index.html

2 - https://log4cpp.sourceforge.net/


I understand that you explained some exceptions to the rule, but I disagree with two things: the assumption of incompetence on the part of geophile to not make logging conditional in some way, and adding the label of "anti-pattern" to something that's evidently got so much nuance to it.

> the non-trivial cost implications of voluminous log output

If log output is conditional at compile time there are no non-trivial cost implications, and even at runtime the costs are often trivial.


> ... I disagree with two things: the assumption of incompetence on the part of geophile to not make logging conditional in some way ...

I assumed nothing of the sort. What I did was identify an anti-pattern and describe an alternative which experience has shown to be a better approach.

"Incompetence" is your word, not mine.

> ... and adding the label of "anti-pattern" to something that's evidently got so much nuance to it.

I fail to see the nuance you apparently can see.

>> the non-trivial cost implications of voluminous log output

> If log output is conditional at compile time there are no non-trivial cost implications, and even at runtime the costs are often trivial.

Cloud deployment requires transmission of log entries to one or more log aggregators in order to be known.

By definition, this involves network I/O.

Network communication is orders of magnitude slower than local I/O.

Useless logging of "function entry/exit, with param values" increases pressure on network I/O.

Unless logging is allowed to be lossy, which it never is, transmission must be completed when log buffers are near full capacity.

Provisioning production systems having excessive logging can often require more resources than those which do not excessively log.

Thus disproving:

> ... even at runtime the costs are often trivial.

When considering the implication of voluminous log output in a production environment.


You are very attached to this "voluminous" point. What do you mean by it?

As I said, responding to another comment of yours, a distributed system I worked on produced a few GB a day. The logs were rotated daily. They were never transmitted anywhere, during normal operation. When things go wrong, sure, we look at them, and generate even more logging. But that was rare. I cannot stress enough how much of a non-issue log volume was in practice.

So I ask you to quantify: What counts (to you) as voluminous, as in daily log file sizes, and how many times they are sent over the network?


> You are very attached to this "voluminous" point. What do you mean by it?

I mean "a lot" or more specifically; "a whole lot."

Here is an exercise which illustrates this. For the purposes here, assume ASCII characters are used for log entries to make the math a bit easier.

Suppose the following:

  Each log statement is 100 characters.
  Each service invocation emits 50 log statements.
  Average transactions per second during high usage is 200tps.
  High usage is on average 2 hours per day.

  100 x 50 x 200 x 60 x 60 x 2 = 7_200_000_000 = 7.2GB / day
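Spelled out as a quick sanity check of the arithmetic above:

```python
chars_per_statement = 100            # bytes, assuming ASCII as stated
statements_per_call = 50
calls_per_second = 200               # average tps during high usage
high_usage_seconds = 60 * 60 * 2     # 2 hours of high usage per day

bytes_per_day = (chars_per_statement * statements_per_call
                 * calls_per_second * high_usage_seconds)
print(bytes_per_day)                 # 7200000000, i.e. 7.2 GB/day
```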
> So I ask you to quantify: What counts (to you) as voluminous, as in daily log file sizes, and how many times they are sent over the network?

The quantification is above and regarding log entries being sent over a network - in many production systems, log entries are unconditionally sent to a log aggregator and never stored in a local file system.


7.2GB/day doesn't sound terrible. And I'd reduce it by a factor of 25, since in normal operation (i.e., not looking into a problem) I would have either 2 log statements per call (entry, exit), or none at all. It might be more than 2, if I needed detailed logging.

But in normal usage, even in the scenario you describe, your argument about log volume is not convincing.


BTW, your username is a bit too on-the-nose, given the way you are arguing, using "anti-pattern" as a way to end all discussion.

> BTW, your username is a bit too on-the-nose ...

Resorting to an ad hominem and non sequiturs are we? Really?

C'mon, you are better than that.


As I said, conditional. As in, you add logging to your code but you either remove it at compile time or you check your config at run time. By definition, work you don't do is not done.

Conditionals aren't free either, and conditionals on logging code (especially compile-time ones) are considered by some a bug-prone anti-pattern as well.

The code that computes data for and assembles your log message may end up executing logic that affects the system elsewhere. If you put that code under the conditional, your program will behave differently depending on the logging configuration; if you put it outside, you end up wasting a potentially substantial amount of work building log messages that never get used.
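The wasted-work half of that trade-off is easy to demonstrate with Python's stdlib logging. The helper name below is made up; it stands in for any costly message construction:

```python
import logging

logging.basicConfig(level=logging.INFO)   # DEBUG is disabled
log = logging.getLogger("app")

calls = []


def expensive_summary():
    calls.append(1)                       # stands in for costly serialization
    return "big-report"


# Eager: the argument is fully built before debug() can discard it.
log.debug("state: %s", expensive_summary())

# Guarded: the whole construction is skipped when DEBUG is off.
if log.isEnabledFor(logging.DEBUG):
    log.debug("state: %s", expensive_summary())

print(len(calls))                         # 1: only the eager call paid the cost
```

And the guard is exactly the kind of conditional that can change behavior if `expensive_summary` has side effects, which is the other half of the trade-off above.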


This is getting a bit far into the weeds, but I've found that debug output which is disabled by default in all environments is quite safe. I agree that it would be a problem to leave it turned on in development, testing, or staging environments.

The whole concept of an “anti-pattern” is a discussion ender. It’s basically a signal that one party isn’t willing to consider the specific advantages and disadvantages of a particular approach in a given context.

> There is no value for logging "function entry/exit, with param values" when all collaborations succeed and the system operates as intended.

Well, I agree completely, but those conditions are a tall order. The whole point of debugging (by whatever means you prefer) is for those situations in which things don't succeed or operate as intended. If I have a failure, and suspect a major subsystem, I sure do want to see all calls and param values leading up to a failure.

In addition to this point, you have constructed a strawman in which logging is on all the time. Have you ever looked at syslog? On my desktop Linux system, output there counts as voluminous. It isn't so much space, or so CPU-intensive that I would consider disabling syslog output (even if I could).

The large distributed system I worked on would produce a few GB per day, and the logs were rotated. A complete non-issue. And for the rare times that something did fail, we could turn up logging with precision and get useful information.


>> There is no value for logging "function entry/exit, with param values" when all collaborations succeed and the system operates as intended.

> Well, I agree completely, but those conditions are a tall order.

Every successful service invocation satisfies "all collaborations succeed and the system operates as intended." Another way to state this is every HTTP `1xx`, `2xx`, and `3xx` response code produced by an HTTP service qualifies as such.

> The whole point of debugging (by whatever means you prefer) is for those situations in which things don't succeed or operate as intended.

Providing sufficient context in the presence of errors, or "situations in which things don't succeed or operate as intended", was addressed thusly:

  If detailed method invocation history is a requirement, 
  consider using the Writer Monad and only emitting log 
  entries when either an error is detected or in an 
  "unconditionally emit trace logs" environment (such as 
  local unit/integration tests).
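
For readers unfamiliar with the Writer Monad, here is a rough sketch of the deferral idea in plain Python (the names are illustrative; a real Writer implementation threads the log through return values with `flatMap`/`bind` rather than a runner-owned list):

```python
# Each step returns (value, pending_log_lines). The runner accumulates
# the lines, emits them only on failure, and discards them on success.

def step_parse(raw):
    return int(raw), [f"parse({raw!r})"]

def step_double(n):
    return n * 2, [f"double({n})"]

def run(raw):
    logs = []
    try:
        n, w = step_parse(raw); logs += w
        d, w = step_double(n); logs += w
        return d                     # success: accumulated trace discarded
    except Exception as exc:
        logs.append(f"error: {exc}")
        for line in logs:            # failure: emit the full trace
            print(line)
        raise
```

So `run("21")` returns 42 silently, while `run("x")` prints the entire call trace leading up to the `ValueError` before re-raising it.
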

> If I have a failure, and suspect a major subsystem, I sure do want to see all calls and param values leading up to a failure.

See above.

> In addition to this point, you have constructed a strawman in which logging is on all the time.

No, I addressed your original premise in the context of a production web application, where logging is configured during deployment.

See also your own contradiction by previously asserting, "I sure do want to see all calls and param values leading up to a failure."

So which is it?

Did I construct a strawman "in which logging is on all the time"?

Or do you "want to see all calls and param values leading up to a failure", which requires "logging is on all the time"?

> Have you ever looked at syslog?

This is a strawman. Syslog is a component for logging and has nothing to do with the programs which use it.

> The large distributed system I worked on would produce a few GB per day, and the logs were rotated. A complete non-issue.

If this is the same system you described in a different comment also in this thread, I identified a standard industry practice of log entries produced by application nodes in a production environment being unconditionally sent to a log aggregator and not stored in a local file system. The reasons for this are well documented.


I'm not sure what point you are making with your scenario involving HTTP response codes. What if the HTTP server crashes, and doesn't send a response at all?

I don't know from Writer Monads. But if you're only emitting log entries on some future failure or request, then that's potentially a lot of logging to keep somewhere. Where? Log aggregator? Local files? Memory? What about log volume? Does this writer monad implement log rotation? It sounds like you are sweeping a lot of the things you object to under this writer monad rug.

Let me be real clear about all calls and param values leading up to a failure.

- In normal operation, turn logging off completely, or turn on some level that produces tolerable log volume (it seems like your threshold is much lower than mine).

- When a failure occurs: restart the service with more logging enabled (hence the all calls and param values), so that you have logging when the failure occurs again.

About local logs vs a log aggregator: The system I worked on was a shared nothing archive. To add storage and metadata capacity, you add nodes. Each node also stored its own log files. I get that this may not be the answer all the time, and that a log aggregator is useful in some scenarios. However, even in that case, your concerns about log volume seem overblown to me.


> I'm not sure what point you are making with your scenario involving HTTP response codes.

My point was to identify how common the "happy path" scenario is and was in direct response to:

>> There is no value for logging "function entry/exit, with param values" when all collaborations succeed and the system operates as intended.

> Well, I agree completely, but those conditions are a tall order.

Regarding your question:

> What if the HTTP server crashes, and doesn't send a response at all?

Again, the HTTP status codes were used to illustrate the frequency of successful invocations. But to your point, if an HTTP server crashes then log entries for in-flight workflows would likely not be emitted. A similar possibility also holds for local file system logging as buffering exists (sometimes on multiple levels).

> I don't know from Writer Monads.

No worries. All it is is a formal definition of a type whose operations satisfy specific properties, called "laws" in the functional programming world.

> But if you're only emitting log entries on some future failure or request, then that's potentially a lot of logging to keep somewhere. Where? Log aggregator? Local files? Memory?

What is meant by "future failure" is a failure potentially encountered during the evaluation of a single invocation. In the context of a HTTP server, this would be the handling of each submission to a specific HTTP endpoint and verb. This is often defined within an IO Monad[0], but does not have to be, and is out of scope for this discussion.

The answer to the rest of your questions is that the deferred log entry definitions are held in memory for the duration of the single service invocation, with any log emissions produced transmitted to a network-accessible log aggregator via the logging component used.

> Let me be real clear about all calls and param values leading up to a failure.

The scenario you kindly shared is understandable, yet is one which has been unacceptable in teams I have worked with. Losing the "first error" is not an option in those environments.

0 - https://en.wikibooks.org/wiki/Haskell/Understanding_monads/I...


FWIW, it seems like poor man's tracing. You'd get that and a lot more by having OpenTelemetry set up (using Jaeger for the UI locally).

I know a lot of people do that in all kinds of software (especially enterprise), still, I can't help but notice this is getting close to Greenspunning[0] territory.

What you describe is leaving around hand-rolled instrumentation code that conditionally executes expensive reporting actions, which you can toggle on demand between executions. Thing is, this is already all done automatically for you[1] - all you need is the right build flag to prevent optimizing away information about function boundaries, and then you can easily add and remove such instrumentation code on the fly with a debugger.

I mean, tracing function entry and exit with params is pretty much the main task of a debugger. In some way, it's silly that we end up duplicating this by hand in our own projects. But it goes beyond that; a lot of logging and tracing I see is basically hand-rolling an ad hoc, informally-specified, bug-ridden, slow implementation of 5% of GDB.
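
To illustrate the overlap: in Python, for example, `sys.settrace` exposes the very hook debuggers are built on, so entry/exit tracing with parameter values needs no instrumentation inside the traced code. A toy sketch (not production-ready; the global hook has real overhead):

```python
import sys

def tracer(frame, event, arg):
    # Debugger-style hook: report entry args and return values for
    # a single function of interest, without touching its source.
    if frame.f_code.co_name != "add":
        return None                    # don't trace other frames
    if event == "call":
        print("enter add", dict(frame.f_locals))
    elif event == "return":
        print("exit add ->", arg)
    return tracer                      # keep receiving events for this frame

def add(a, b):
    return a + b

sys.settrace(tracer)
result = add(2, 3)
sys.settrace(None)
# result == 5; the entry/exit lines came from the trace hook alone
```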

Why not accept you need instrumentation in production too, and run everything in a lightweight, non-interactive debugging session? It's literally the same thing, just done better, and a couple of layers of abstraction below your own code, so it's more efficient too.

--

[0] - https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule

[1] - Well, at least in most languages used on the backend, it is. I'm not sure how debugging works in Node at the JS level, if it exists at all.


A logging library is very, very far from a Turing complete language, so no Greenspunning. (Yes, I know about that Java logger fiasco from a few years ago. Not my idea.)

I don't want logging done automatically for me; what I want is too idiosyncratic. While I will log every call on major interfaces, I do want to control exactly what is printed. Maybe some parameter values are not of interest. Maybe I want special formatting. Maybe I want the same log line to include something computed inside the function. Also, most of my logging is not on entry/exit. It's deeper down, to look at very specific things.

Look, I do not want a debugger, except for tiny programs, or debugging unit tests. In a system with lots of processes, running on lots of nodes, if a debugger is even possible to use, it is just too much of a PITA, and provides far too minuscule a view of things. I don't want to deal with running to just before the failure, repeatedly, resetting the environment on each attempt, blah, blah, blah. It's a ridiculous way to debug a large and complex system.

What a debugger can do, that is harder with logging, is to explore arbitrary code. If I chase a problem into a part of my system that doesn't have logging, okay, I add some logging, and keep it there. That's a good investment in the future. (This new logging is probably at a detailed level like DEBUG, and therefore only used on demand. Obvious, but it seems like a necessary thing to point out in this conversation.)


I agree that logging all functions is reinventing the wheel.

I think there's still value in adding toggleable debug output to major interfaces. It tells you exactly what and where the important events are happening, so that you don't need to work out where to stick your breakpoints.
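
One common shape for this (an illustrative convention, not something from the thread): boundary-level DEBUG calls on the major interface, off by default and toggled per deployment, e.g. via an environment variable:

```python
import logging
import os

# Hypothetical toggle: APP_DEBUG=1 enables interface tracing for
# this deployment; otherwise only INFO and above are emitted.
level = logging.DEBUG if os.environ.get("APP_DEBUG") == "1" else logging.INFO
logging.basicConfig(level=level)
log = logging.getLogger("storage")

def put(key, value):
    log.debug("put(%r, %r)", key, value)   # interface-boundary trace
    # ... actual storage work ...
    log.debug("put(%r) ok", key)

put("k", "v")
```

The trace points double as documentation of where the important events happen, so when something does go wrong you already know where the breakpoints would go.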


I don't quite like littering the code with logs, but I understand there's a value to it.

The problem is that if you only log problems or "important" things, then you have a selection bias in the log and no reference for what the log looks like when the system operates normally.

This is useful when you encounter an unknown problem and need to find unusual stuff in the logs. This unusual stuff is not always an error state; it might be some aggregate problem (something is called too many times, something is happening in a problematic order, etc.)


> The problem is that if you only log problems or "important" things, then you have a selection bias in the log and don't have a reference of how the log looks like when the system operates normally.

A case can be made for only logging the steps performed up to and including an error. This even excludes logging "important things" other than those satisfying system/functional requirements (such as request/response audit logs).

It is reminiscent of "the Unix philosophy", but different in important ways, and is essentially:

  Capture what would be log entries if there is a
  future unrecoverable error.

  If an error is encountered, emit *all* log entries starting
  from the earliest point (such as receiving an external
  event or a REST endpoint request) up to and including the
  information detailing the unrecoverable error.

  If the workflow succeeds, including producing an expected
  failed workflow response (such as a validation error),
  discard the deferred log entries.

What constitutes the deferred log entries accumulated along the way is specific to the workflow and/or domain model.
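
In an imperative language the same pattern can be sketched without any monadic machinery, as a per-request buffer that is flushed only on unrecoverable errors (the handler shape and the `charge` step are illustrative):

```python
class DeferredLog:
    """Per-request buffer of would-be log entries."""
    def __init__(self):
        self.entries = []
    def add(self, msg):
        self.entries.append(msg)

def charge(body):
    # Stand-in for a collaborator that can fail unrecoverably.
    if body.get("amount", 0) < 0:
        raise ValueError("negative amount")

def handle_request(body, emit=print):
    log = DeferredLog()
    log.add(f"received {body!r}")
    if not body.get("user"):
        # Expected validation failure: respond normally, discard the trace.
        return {"status": 400}
    try:
        log.add("charging card")
        charge(body)                     # may raise
        return {"status": 200}           # success: trace discarded
    except Exception as exc:
        log.add(f"unrecoverable: {exc}")
        for line in log.entries:         # failure: emit the whole trace
            emit(line)
        return {"status": 500}
```

A successful request and a validation failure both produce no trace output; only the unrecoverable path emits every entry from "received" onward, preserving the first error and everything leading up to it.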

While using a functional programming language and employing referentially transparent[0] abstractions (such as the Writer Monad) usually makes implementing this pattern much simpler than using an imperative language, it can be successfully done in the latter given sufficient discipline and a referentially transparent workflow implementation.

An important complement to the above is to employ other industry standard verification activities, such as unit/feature/integration tests.

0 - https://en.wikipedia.org/wiki/Referential_transparency


I don't know what's "important" at the beginning. In my work, logging grows as I work on the system. More logging in more complex or fragile parts. Sometimes I remove logging where it provides no value.
