Hacker News | dwc's comments

> The right comparison is with the distance at which this photo was taken: 18,000 miles, or about half an hour from closest approach.

Implicit in this statement: the camera was pointing close to the direction of travel and the target wasn't moving much within the field of view. Hugely easier than trying to image from the side at closest approach, which at best would have given a smeared image and at worst a complete miss.

It's still impressive and awesome. But always try to skew the odds of success in your favor when dealing with stuff like this. You have one pass and then the opportunity is gone.


> Hugely easier than trying to image from the side at closest approach

True that. AFAIK the closest-approach images have not yet been downloaded.


> the language isn’t CLOS all the way down

I'm curious what you mean by this, why it's needed or would be a good thing, etc. As a multi-paradigm language, I'm not seeing why CL should have a particular paradigm "all the way down".


It's good because it offers the opportunity to simplify and rationalize the type system and associated protocols without losing features.

In the early 1990s I worked on an experimental Newton OS written in Dylan. At that time, Dylan was still called "Ralph," and it was basically an implementation of Scheme in which all datatypes were CLOS classes. It was "CLOS all the way down."

Ralph offered substantially the same affordances and conveniences as Common Lisp, but with a simpler and more coherent set of APIs. Ralph was easier to learn and remember, and easier to extend.

To illustrate why, consider finite maps. A finite map is a convenient abstraction with a well-defined API. Common Lisp offers a couple of convenient ways to represent finite maps, and it's easy to build new representations, but there's no generic API for them. Instead, each representation has its own distinct API that bears no particular relation to the others.
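To make the fragmentation concrete, here's a sketch of three standard ways to represent a finite map in Common Lisp, each with its own unrelated lookup API:

```lisp
;; Hash table: lookup via GETHASH
(defvar *h* (make-hash-table))
(setf (gethash :a *h*) 1)
(gethash :a *h*)           ; => 1, T

;; Association list: lookup via ASSOC plus CDR
(defvar *a* '((:a . 1) (:b . 2)))
(cdr (assoc :a *a*))       ; => 1

;; Property list: lookup via GETF
(defvar *p* '(:a 1 :b 2))
(getf *p* :a)              ; => 1
```

Three representations of the same abstraction, three disjoint vocabularies.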

By contrast, Ralph had a single well-defined API that worked with any representation of finite maps, whether built-in or user defined.

The upshot is a library of datatypes that is just as rich as Common Lisp's, but with a simpler and more coherent set of APIs, and an easy standard way to extend them with user-defined types that also support the same APIs.

There are signs in the Common Lisp standard that people were already thinking in that direction when the standard was defined. See the sequence functions, for example. Ralph, designed by Common Lisp and Smalltalk implementors, carried that thinking to its logical conclusion, and the result was something like a tidier Common Lisp.
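The sequence functions are the clearest example of that direction in the standard: one set of operations that works uniformly across lists and vectors:

```lisp
(find 3 '(1 2 3 4))            ; => 3
(find 3 #(1 2 3 4))            ; => 3
(remove-if #'oddp '(1 2 3 4))  ; => (2 4)
(remove-if #'oddp #(1 2 3 4))  ; => #(2 4)
```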

Twenty-eight years later, Ralph is still my favorite of all the languages I've used for serious work. Its development system, Leibniz, remains my favorite development system. My favorite current tools are Common Lisp systems, but that's because I can't have Ralph and Leibniz anymore.


> My favorite current tools are Common Lisp systems, but that's because I can't have Ralph and Leibniz anymore.

You said below that you don't find modern day Dylan to be as valuable. I don't know much about Dylan, either the pre-1992 version or the newer version(s), but I'm curious if you would elaborate on why the older Dylan was so much superior to modern Dylan in your view?


Because modern Dylan is not an old-fashioned Lisp.

I prefer working the old-fashioned Lisp way. I start my Lisp and tell it, an expression at a time, how to be the app I want. Modern Dylan doesn't work like that. It's much more a batch-compiled affair, where you write a lot of definitions and compile them all at once to yield an artifact.

Modern Dylan does not have a Lisp-style repl that you can use to gradually build up your app interactively, teaching the runtime new tricks, and incrementally querying it to examine what you've built--as I did when working on the Dylan Newton.

For a while, Bruce Mitchener and I discussed what it would take to restore that kind of support to OpenDylan, but in the end I concluded it was an impractical amount of work.


Gotcha. I've never really used OpenDylan, but I've had it on my "list of things to learn one day" for a while, so just curious about your take on that. I didn't realize that old Dylan had a lisp style REPL and that new Dylan doesn't.


What's your opinion on Julia given its Dylan heritage?


Julia is almost good enough for me to use, but not quite. I'd prefer an s-expression syntax, but that's not a deal killer for me. I like some other languages that don't have s-expression syntaxes, though I miss the easy text manipulation that s-expressions support.

I would want a convenient way to deliver a self-contained executable. If there's a simple way to do that with Julia, I don't know about it. I look for it periodically, but haven't found it. If it's there and I've simply overlooked it, then I might actually start using it regularly.

I have a few other nits, but they're just nits. On the whole, I think Julia's pretty nice.


One thing I forgot to mention is that I don't know whether Julia handles dynamic redefinitions gracefully.

What I mean is, for example, if I evaluate a new definition of an existing class, what happens to all the existing instances of that class? In Common Lisp, the old instances are now instances of the new class, and there is a runtime facility, defined in the language standard, for updating existing instances to ensure that they conform to the new definition.
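Here's a minimal sketch of that facility in standard Common Lisp (class and slot names invented for illustration):

```lisp
(defclass point ()
  ((x :initarg :x :accessor x)))

(defvar *pt* (make-instance 'point :x 1))

;; Re-evaluate the class with an extra slot...
(defclass point ()
  ((x :initarg :x :accessor x)
   (y :initform 0 :accessor y)))

;; ...and the existing instance is updated: on its next use it
;; acquires the new slot, filled in from the :initform.
(list (x *pt*) (y *pt*))   ; => (1 0)
```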

If a language lacks facilities like that, then it's hard to work the way I prefer to work.

I guess I sort of expect that Julia will not have graceful support for redefinitions, because, generally speaking, the only people who even think of that feature are people who are intimately familiar with old-fashioned Lisp and Smalltalk systems, and they're sort of thin on the ground.

But maybe I'll be pleasantly surprised.


Julia doesn't have classes; it relies primarily on multimethods, which for scientific computing is a much better fit, IMHO. That being said, it's possible to redefine pretty much any operator, including built-in ones, at the repl.


Your reply is a bit confusing, because multimethods are not an alternative to classes. Common Lisp and Dylan, for instance, offer both multimethods and classes.

Regardless, Julia does offer user-defined composite types. Can I redefine a composite type without halting the program in which it's being used? If so, what becomes of existing instances of the type?

If the answer to the first question is "yes," and if the answer to the second one is "the language runtime arranges for the existing instances to be updated to be instances of the redefined type," then Julia offers the kind of support for redefinition that I am accustomed to in Common Lisp. If not, then it doesn't.

EDIT: I dug around and answered my own question: Julia doesn't support redefining structs in the repl.

There's a project in progress (Tim Holy's Revise.jl) to add support for redefining functions in a session, and that project contains some discussion of how they might approach redefining structs.

Of course, the existence of the project and those comments implies that Julia does not currently support such redefinitions, and that answers my questions.

I did notice from the comments on some issues that those folks are aware that supporting redefinition of structs in the repl implies that existing instances may become orphans when their types are redefined, and there's some discussion of what to do about it. Common Lisp's solution--updating the existing instances to conform to the new definition--does not seem to have occurred to anyone.

That's not a big surprise. Why would such a feature occur to you unless you were consciously designing a system for building programs by modifying them as they run? Of course, that's exactly what old-fashioned Lisp and Smalltalk systems are designed for, but most people don't get much exposure to that style of programming.

I always end up missing those features when I don't have them, though, which is one reason I always end up going back to Common Lisp.


True, classes and multimethods aren’t exactly interchangeable. It’s just that I haven’t found classes useful (at least as done in Java/Python/C++ objects) as compared to the combination of multimethods and type specialization. At least in the context of scientific computing.

Does CL use virtual tables to implement CLOS? I've always been curious about that. It seems CL must keep the state associated with redefined objects. How do you handle new fields and fill in their values with CLOS?

It does appear you can’t redefine structs in the repl. Forgot about that point, though as you point out there doesn’t appear to be anything fundamental to prevent that from being changed in the future. I haven’t used Julia day-to-day much for a while, but hopefully the newer generation tools will add in the “old” features from CL and similar.

Have you ever tried CLASP?


CLOS classes are logically equivalent to structs. In fact, in some implementations, they are exactly the same. They are therefore useful in exactly the same ways and the same circumstances that structs are useful.

Maybe what you don't find useful is inheritance. I can see that. I'm not heavily invested in inheritance myself, though it can be useful in cases where you want a bunch of structs with some shared structure, or in cases where you want multimethods that share some behavior across a bunch of related types.

The terminology "virtual table" is commonly used with C++ and other languages that associate methods with classes. In such languages, each object carries a hidden pointer to its class's virtual method table, which is used for method dispatch.

In CLOS, methods are associated with generic functions, not with classes, and are dispatched on any number of arguments. The standard specifies how generic functions and methods behave, but does not specify how they are to be represented, so the representation is an implementation-specific detail.

A naive toy representation might be a table associated with each generic function that maps sequences of types to methods. When the function is applied, Lisp computes the values and types of the arguments and finds the appropriate method for those types. I'm sure you can imagine the sorts of optimizations implementations apply to speed things up, including compiling monomorphic generic functions to simple function calls.
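A toy sketch of that naive table-based dispatch (all names assumed; this ignores method ordering, specificity, and everything a real CLOS implementation optimizes):

```lisp
(defvar *toy-methods* '())   ; alist: (list of type specifiers) -> function

(defun toy-add-method (types fn)
  (push (cons types fn) *toy-methods*))

(defun toy-call (&rest args)
  ;; Find the first entry whose type list matches all the arguments.
  (loop for (types . fn) in *toy-methods*
        when (and (= (length types) (length args))
                  (every #'typep args types))
          do (return (apply fn args))
        finally (error "No applicable method for ~S" args)))

(toy-add-method '(number number) #'+)
(toy-add-method '(string string)
                (lambda (x y) (concatenate 'string x y)))

(toy-call 1 2)         ; => 3
(toy-call "ab" "cd")   ; => "abcd"
```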

This is a bit of an oversimplification, because CLOS also provides a bunch of ways to control and customize how dispatch works--CLOS is less an object system than it is a system for building object systems.

When you redefine a class, CLOS automatically calls MAKE-INSTANCES-OBSOLETE, which arranges for all existing instances to be marked obsolete (it's up to the implementation to determine exactly what that means). When control touches an obsolete instance, the runtime calls UPDATE-INSTANCE-FOR-REDEFINED-CLASS with the instance, a list of added slots, a list of discarded slots, and a property list mapping the names of discarded slots to the values they had when they were discarded. If you've specialized UPDATE-INSTANCE-FOR-REDEFINED-CLASS for the case in question, the instance is reinitialized according to your specialized method, and things proceed as if it had the new type definition when it was instantiated.
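Here is a hedged sketch of that protocol in action (class and slot names invented): a redefinition that discards one slot, plus an UPDATE-INSTANCE-FOR-REDEFINED-CLASS method that migrates the old value into the new slots:

```lisp
(defclass rect ()
  ((size :initarg :size :accessor size)))

(defvar *r* (make-instance 'rect :size 10))

;; Redefine: SIZE is discarded, WIDTH and HEIGHT are added.
(defclass rect ()
  ((width :accessor width)
   (height :accessor height)))

;; PLIST maps discarded slot names to their old values;
;; recover SIZE and use it to fill both new slots.
(defmethod update-instance-for-redefined-class :after
    ((instance rect) added-slots discarded-slots plist &rest initargs)
  (declare (ignore added-slots discarded-slots initargs))
  (let ((old-size (getf plist 'size)))
    (setf (width instance) old-size
          (height instance) old-size)))

;; The update runs lazily, when *R* is next touched:
(list (width *r*) (height *r*))   ; => (10 10)
```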

If you haven't specialized UPDATE-INSTANCE-FOR-REDEFINED-CLASS then you'll end up in a breakloop. A breakloop is a repl session with read and write access to the call stack and the variable environment. The assumption is that you'll inspect the stack and environment, decide what UPDATE-INSTANCE-FOR-REDEFINED-CLASS needs to do, write that code, then invoke a restart that causes the halted function to resume execution as if your new definition had existed when it was originally called.

Again, the language is designed with the assumption that writing a program by modifying it while it runs is standard operating procedure. That being the case, the obvious thing to do when there isn't a relevant definition for UPDATE-INSTANCE-FOR-REDEFINED-CLASS is to offer you the chance to create one, and resume execution from there once you've created it.

I've examined CLASP a bunch of times. I keep meaning to mess with it, but I haven't yet.


> I would want a convenient way to deliver a self-contained executable.

It's a bit rough around the edges, but it does exist: https://github.com/JuliaLang/PackageCompiler.jl


Excellent; thank you! I'll take a look.


Uniformity, which is a really good thing. Sure, you can say (class-of 3) or (class-of nil) or (class-of '(1 2 3)), but technically these values are not objects. Compare Scala, a real-world example of how good it is to have a uniform language: everything is an expression, every value is an instance of a class, and therefore everything is uniformly higher-order and uniformly typed (unlike Java, with its distinction of so-called primitive types), etc., etc.


Uniformity through imposing one paradigm on everything isn't attractive at all to me, especially for a paradigm I have no interest in using and avoid when I practically can.


Every value in CL is an instance of a class. Some of those classes are built-in classes, which are restricted for performance reasons. You do not inherit from Int in Scala either, since it is marked as "final", as far as I know.
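You can check this at any repl; the exact printed form and some class choices vary by implementation (shown here roughly as SBCL prints them):

```lisp
(class-of 3)          ; => #<BUILT-IN-CLASS FIXNUM>
(class-of nil)        ; => #<BUILT-IN-CLASS NULL>
(class-of '(1 2 3))   ; => #<BUILT-IN-CLASS CONS>
(class-of "abc")      ; a string class, e.g. SIMPLE-BASE-STRING

;; These really are the classes the type system knows about:
(eq (class-of nil) (find-class 'null))   ; => T
```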


In Scala an integer has methods, like everything else, unlike in Java and C++, and this is the point and the big deal.

    3 + 2   // is actually the method call 3.+(2)
which is the right thing.


You can specialise generic functions on built-in classes in standard CL. Lisp methods are specialisations of generic functions; they don't belong to a class the way methods do in, e.g., C++. The issue you're talking about is that not all functions in Common Lisp are generic functions, and you can't specialise ordinary functions.

There's nothing stopping you from doing

    (defmethod add ((x number) (y number)) (+ x y))
    (defmethod add ((x string) (y string)) (concatenate 'string x y))
or whatever (multiple dispatch, too), and you could even call it + instead of ADD if you wanted (but not COMMON-LISP:+, so other code would continue to work; your packages could import your + instead of the standard one).


"+" is a function, what makes it "right" to be a method?


You mean, what makes it right to be a generic function that has methods?

First, + does dynamic dispatch based on the types of its arguments. It does different things when adding fixnums, vs. integers, vs. rationals, and so on, as well as a default method that signals a type error (in safe code). So it has methods, even if they aren't necessarily implemented as standard methods (but they could be).

Secondly, a user might want to make + work on other, user-defined classes (for example, if he user wanted it to work on a class representing quaterions). To make that work, the user would have to be able to add methods for those classes. One can imagine many CL builtins being implemented as generic functions to which users could add methods. This would be consistent with the standard.
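A sketch of the quaternion case (using a generic ADD rather than shadowing +, with invented class and accessor names):

```lisp
(defclass quaternion ()
  ((r :initarg :r :reader q-r)
   (i :initarg :i :reader q-i)
   (j :initarg :j :reader q-j)
   (k :initarg :k :reader q-k)))

;; ADD falls back to + for ordinary numbers...
(defgeneric add (x y)
  (:method ((x number) (y number)) (+ x y)))

;; ...and the user extends it componentwise for quaternions.
(defmethod add ((x quaternion) (y quaternion))
  (make-instance 'quaternion
                 :r (add (q-r x) (q-r y))
                 :i (add (q-i x) (q-i y))
                 :j (add (q-j x) (q-j y))
                 :k (add (q-k x) (q-k y))))

(add 1 2)   ; => 3
```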


A function is just a method that returns a value. Why make a special case for it? Of course, you can go the other direction and allow functions to return nothing (or a representation of nothing, like nil). That's fine too.


But would such a common lisp be better than something like Dylan?


It would not be better than circa-1992 Dylan. It would be better than present-day Dylan, though.

My opinion only, of course.


> And 10 years later, sharing stuff with people nearby still kinda sucks.

It does still suck, and once every few years I feel that. But here's the problem: solving a minor, "kinda" pain point that people experience once in a while isn't that big of a win. I'm sure other people want to do this more often than I do, but I'm also sure this case is a small subset of the need to share stuff with others generally. I.e., you're always needing to share stuff with people in Boston, LA, or Singapore, or with your manager on another floor of the building, or wherever. You know how to do that. So while it seems silly that you can't share something more easily with someone sitting across the table from you, the methods you already use do work.


I grow blue java bananas in my yard. Rajapuri bananas are fairly popular around here.


Where is "around here"?


Phoenix AZ area.


> It's unfortunate how much Google-hate is on HN these days because I think it's largely unjustified. There are definitely some bad (IMHO) leadership decisions but the rank-and-file are still culture carriers for a lot of the things that made Google great.

The fairly small number of people I know who are Googlers or Xooglers are all pretty awesome as techies and as people. That does little to change my opinion of Google itself. Sometimes it makes me even more cynical, thinking that management might take special care in internal messaging lest the rank-and-file revolt.

But from my perspective, from the outside, what Google does as a company is what counts for me and for society. All the good people inside don't ameliorate the external behavior of the company.


> lest the rank-and-file revolt

The rank-and-file are extremely well compensated, with cash, with stock, with on-the-job perks. Why would they bite the hand that feeds them? A lot of people will put up with incredible violations of their principles before they'll risk losing a comfortable position.


Googlers will revolt, and have on multiple occasions. Many have resigned when their personal red lines were crossed, usually over things the external world would not have noticed. Various top-down decisions have been reverted as a result, up to and including walking back an announcement that had already gone public.

A big part of the reason is that finding the next job after Google is nowhere near a hard problem. Chances are you might even get paid more at the next place.

Source: I'm a Googler who has signed some of these petitions, has never had his red lines crossed, and is still happy to work here.


This is one of many reasons why it is worthwhile to compensate well, at all levels: loyalty can be purchased.


Spoofed Caller ID is a lie, not anonymity.

Here's a proposal: 1) callers may choose to hide/suppress their Caller ID; 2) carriers must offer callees the option to completely block calls (no ring, no voicemail) that don't carry Caller ID; 3) when present, Caller ID must be accurate.

The above allows anonymity but disallows deceit. It also provides opt-out for people not to receive anonymous calls. (Anonymity does not give you the right to have any given individual listen to you)

We could have had this for ages, as there are no great technical hurdles.


Note on above: there are legitimate reasons for businesses to set Caller ID to something other than the call origin. But the "something other" should be selected from a set under the control of the business, not a free for all. I.e., a desk phone with DID may show as the main company number, etc.

This takes a little more work to account for, but it shouldn't be a roadblock.


No, a phone should give the exact number for that exact phone, since that is whom I want to call back, should I need to. That phone may be manned by more than one individual (in case of shift work), but nobody should have to go through a phone tree.


This won't work; contact centres are more complicated than you'd imagine.


If we want authentication, public-private key pairs are a great idea.

A trusted network is fine, but are the telephone networks trusted? Can't get them to implement simple features in our interests, doesn't seem very much like a trusted third party to me. Meanwhile, OTT services work. Some let you whitelist contacts. Killer features like that will push everyone off POTS eventually anyway.

Inertia and legacy will keep holdouts using POTS, just like people still use fax machines, but they're mostly irrelevant.


> A trusted network is fine, but are the telephone networks trusted?

In the old days there was an assumption that if you were in the network then you were trusted, which was always dicey but made more sense when Ma Bell controlled everything tightly. Since the monopoly breakup that model no longer holds, and many problems trace back to patching security onto a system designed around an entirely different trust model that no longer fits reality.


You start by saying that too much is blamed on leadership, and then go on to describe the real reason, which is a failure of leadership?

FWIW, I largely agree with the "cogs" idea. One particularly frustrating thing is seeing repeated failures while management fails to consult the workers about how they happened. Seeing the repeated problems, the workers volunteer their insights to management about the underlying causes and possible solutions. But that info is either discarded (after "careful consideration") or warped to fit an existing but incorrect management narrative. The very idea that the people doing the work could contribute anything meaningful beyond estimates doesn't seem to be palatable.


> One question I have is how sequestered the undersea carbon really is

Depends on how you interpret "sequestered". In the sense that it's locked away somewhere for a long time, like organic matter frozen into tundra, it's not very sequestered. But as long as there's lots of seaweed there'll be a lot of carbon taken up. Even though individual plants may die and release their carbon on a relatively short timeline, new growth will take up carbon.


Yes, correctness absolutely comes first.

One way to achieve greater simplicity is to negotiate for fewer/simpler requirements for the first revision. There's often a core set of functionality that can be implemented correctly in a simpler way, and that gets the work done. Once that's in place it's interesting to see how often people lose interest in what were "hard" requirements before. It's also common that new asks have little to no resemblance to those unimplemented features, and are instead things that they found out they needed after using the new system.


I like and use early exits when they fit well, which is often. And in this case I'd probably do as you suggest. But it's worth mentioning that even if you stick with if/else it's often worth it to put the simple case first to reduce mental load:

    if cache.contains(key) {
      return cache.get(key)
    } else {
      v = db.query("..")
      expiry = calcexpiry(v)
      cache.put(key, v, expiry)
      return v
    }


This example makes me wish for an early return even more. Wouldn't it be sweet?

