Emacs is a relic; it can take weeks of configuring to get it anywhere near as productive as something like VS Code or JetBrains IDEs. If you want to procrastinate for a few months, it's the perfect tool.
People were saying that when I started using it, over 20 years ago. Apparently I was supposed to be using Notepad++ or some MS program du jour which no one remembers now. I wouldn't be surprised if I keep using it for the rest of my career.
> it can take weeks of configuring
More like decades ;)
> to get it anywhere near as productive as something like VS Code or JetBrains IDEs
Nope - it's significantly more productive. At least for me - I can't speak for you, but I do seem to be much faster at processing text than any of my colleagues who are using those other editors.
All the configuring is by design. You can set up the editor to take advantage of your strengths and paper over your weaknesses.
Processing text is maybe 10% of the time and 5% of the effort of software development. Emacs fails out of the box at basic core IDE features, like semantically locating a variable's declaration and usages.
This can only work if the editor internalizes the corresponding language. That does not scale for all languages available.
That’s exactly what the language server protocol is all about (and certainly what parsers like tree-sitter are there for). Emacs can connect to those and acquire semantic analysis capabilities. The degree to which such semantic language features work for you will also depend strongly on the language, of course.
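To make the "semantics over a protocol" point concrete, here is a rough sketch (Python, purely illustrative; the file path and position are made up) of the JSON-RPC message an LSP client such as eglot sends when you ask "who references this symbol?":

    import json

    # Illustrative only: an LSP "find references" request. Any client
    # (eglot, lsp-mode, VS Code) speaks this same protocol, so the editor
    # never has to understand the language itself.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "textDocument/references",
        "params": {
            "textDocument": {"uri": "file:///home/me/project/lib/foo.raku"},
            "position": {"line": 41, "character": 7},
            "context": {"includeDeclaration": True},
        },
    }
    body = json.dumps(request)
    # LSP framing: a Content-Length header, a blank line, then the JSON payload.
    print(f"Content-Length: {len(body)}\r\n\r\n{body}")

Whether the answer that comes back is any good is entirely up to the language server on the other end, which is where the per-language differences show up.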
If you use a commercial editor it'll jump-start you for the popular languages. But if, say, you wanted to program in Raku or Scheme, you're also going to need to fiddle around.
Unfortunately for the estimated 0.18%-language-share's worth of Raku devs, it does not appear that finding references is supported: https://langserver.org/#implementations-server. So it's not even a great solution for the ~5% of languages that don't have dedicated commercial editors that actually just work, instead of requiring devs to do extensive proprietary meta-programming to achieve a half-functioning IDE facsimile.
That gap only matters if you happen to use "eglot.el" (there are two major language server clients available for Emacs), and even then it will not turn off "finding references" completely, because Emacs happens to have a default backend called "etags-xref-backend" based on a reverse index file. You can easily generate that index in your project directory using a shell tool shipped with Emacs, or even regenerate it automatically from a git commit hook, as sketched below. So Emacs can stand in until Raku's language server is fixed.
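Something along these lines would keep the index fresh (a minimal sketch in Python, assuming GNU Emacs' etags binary is on PATH; the glob patterns are only an example and would need adjusting for a real project):

    #!/usr/bin/env python3
    # Hypothetical .git/hooks/pre-commit helper: regenerate the TAGS file
    # that etags-xref-backend reads, so "find definition/references" keeps
    # working without any language server at all.
    import pathlib
    import subprocess

    def regenerate_tags(root=".", patterns=("*.raku", "*.rakumod")):
        files = [str(p) for pattern in patterns
                 for p in pathlib.Path(root).rglob(pattern)]
        if files:
            subprocess.run(["etags", "-o", "TAGS", *files], check=True)

    if __name__ == "__main__":
        regenerate_tags()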
So ... while you would be in trouble with a commercial editor now, you can continue hacking happily thanks to Emacs' flexibility.
It's not happy hacking to have to think about the internal configuration of how broken my IDE's language support is. Emacs is cool for what it is but it's objectively deficient for modern professional development
You seem to have a peculiar definition of "professional development" that is doing some heavy lifting there and that you are not sharing with us. There are many people here who do use Emacs when developing in their profession.
I agree, but I also feel like there is so much to improve in the editor space, and the current “defaults” are just good enough. I like the keyboard example - the current staggered keyboard layout is only used because physical typewriters needed staggered keys due to mechanical constraints. But our fingers aren’t really good at going left and right; even a slight movement means moving your whole wrist. Also, keeping your wrists so close to each other is painful and unnatural, definitely less comfortable than keeping them at shoulder distance. But the default is just good enough and too entrenched. We have split ortho keyboards, but they are niche. So any revolution in editors will be stopped by the fact that too many people are comfortable enough with the current default. And I’m honestly hungry for some revolution here
> And I’m honestly hungry for some revolution here
Nicely put.
Yes, I think there is room for a lot of innovation yet in many areas: OS design, programming language design, input devices, GUIs, and much more besides.
That said, when implemented well, there's a lot of stuff that's pretty good.
I think one aspect that's neglected is that yes, some tech we all use is very old and legacy-inspired. But often, it's not that we use some ancient thing because it's ancient and it hasn't changed. It's because it had lots of competitors but it beat them all.
For instance, QWERTY. Yes it is very old and yes it came from typewriters.
But I own 2 computers with
ABCDEF
GHIJKL
MNOPQR
STUVWX
YZ
... layouts. There are also QWERTZ and AZERTY, used by millions. Dvorak failed, to its inventors' deep misery. (No, not Dvořák, although Dvorak was a descendant of Dvořák.)
There have been tonnes of others tried.
The survivors are the ones that beat out the competition.
And in some instances, the competition survives and does OK in its little niche, and that is fine, too...
It's not exactly correct to compare Emacs to an IDE. I'm not a coder (even though I write some scripts from time to time), but still Emacs is my working instrument. The main use cases are org-mode and knowledge base with org-roam. You can't get it with VS Code.
> The reason for testing is getting ahead of issues before the user finds them.
The customer always uses software in some unexpected way, but we don't defer all "testing" to just watching the customer interact.
Testing is to give developers the fastest, most meaningful, most actionable feedback about their code changes so that they can make better decisions.
Customer issues are not the only things we're worried about. And software seems to be usable and make plenty of money even when users see something screwy once in a while.
I've seen code bases where a lot of the tests are actually testing the underlying framework's code (i.e. the .NET Framework) after they mocked out all the dependent services. In my mind unit tests are a waste of time except for testing really complex algorithms where there are no side effects. Automated end-to-end QA testing is where the effort should be spent.
Unit tests are mostly for hitting the branching logic within your procedure/method/function (and the private helpers it calls that can't be tested independently), and so are naturally paired with code coverage. If your percent coverage is high and someone changes the implementation, a drop in coverage and/or the failure of one or more unit tests can signal that some assumptions about the behavior may need additional consideration.
End-to-end testing is important too, but it may be difficult to ferret out tricky bugs if unit tests and coverage for your modules (or whatever your lang/kit calls them) don't tell their own story about what code is being exercised and how.
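As a toy illustration (hypothetical function and tests, not from any particular codebase; assumes pytest and the pytest-cov plugin), each branch below is a separate unit-test target:

    # discount.py: deliberately branchy logic.
    def discount(price, is_member, coupon=None):
        if price < 0:
            raise ValueError("price must be non-negative")
        rate = 0.10 if is_member else 0.0
        if coupon == "SAVE5":
            rate += 0.05
        return round(price * (1 - rate), 2)

    # test_discount.py: one test per branch of interest.
    import pytest
    from discount import discount

    def test_member_discount():
        assert discount(100, is_member=True) == 90.0

    def test_coupon_stacks_with_membership():
        assert discount(100, is_member=True, coupon="SAVE5") == 85.0

    def test_non_member_pays_full_price():
        assert discount(100, is_member=False) == 100.0

    def test_negative_price_rejected():
        with pytest.raises(ValueError):
            discount(-1, is_member=False)

Running something like "pytest --cov=discount --cov-branch" then shows whether every branch was exercised; if someone later changes or removes, say, the coupon branch, either a test fails or branch coverage drops, which is exactly the signal described above.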
They could have just used different Java/Python libraries and changed the architecture for the same result. The choice of programming language alone has minimal impact on performance at that scale.
> They could have just used different Java/Python libraries and changed the architecture for the same result.
Yep, their blog post is carefully worded to say "The combined effect of better architecture and Elixir" but they didn't mention how much of it is related to architecture or what specifically they did with Elixir to make things faster. It feels like a marketing piece for their consulting services.
I mean they put:
> Rewrote an #AWS APIGateway & #lambda service that was costing us about $16000 / month in #elixir. Its running in 3 nodes that cost us about $150 / month
They saved roughly 100x here ($16,000 down to about $150 a month) by moving from an expensive architecture (serverless Lambdas) to what are potentially reserved instances, which are reasonably affordable, at least by cloud standards.
I remember once needing to parse XML in Python. I started with the easy approach of using the first XML parsing library I found which was xmltodict. Eventually I stumbled upon lxml which improved overall performance by 20x and I didn't have to rewrite much code at all. Sometimes it's easy to get big wins in your existing language if you know what the problem is.
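Roughly the kind of swap I mean (a toy sketch, not my actual code; assumes the xmltodict and lxml packages are installed):

    import xmltodict            # pure-Python, very convenient, slow on big documents
    from lxml import etree      # C-backed libxml2 bindings, much faster

    xml = b"<orders><order id='1'><total>9.99</total></order></orders>"

    # xmltodict: the whole document becomes nested dicts.
    doc = xmltodict.parse(xml)
    print(doc["orders"]["order"]["total"])   # "9.99"

    # lxml: the same lookup via the ElementTree-style API.
    root = etree.fromstring(xml)
    print(root.findtext("order/total"))      # "9.99"

Since both hand you plain strings back, the code downstream of the parse barely has to change, which is why the 20x came cheap.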
A rewrite will almost always give you more optimized code, because you know what you are writing.
Though the article says they rewrote the notification system, and Erlang/Elixir is pretty amazing for that kind of thing, starting with the memory footprint per long-running connection.
I mean there are plenty of examples in the wild left and right.
Have you ever seen basic DB optimization? Just in the companies I've worked at, people were using stuff wrong.
Performance and architecture are an afterthought in our industry. The average developer doesn't think about them.
There was one query in my company which was running slower in one region than in another, and there was also an explain statement available. No one looked at the explain statement and thought, "huh, why does this simple select use so much memory?" People were trying to see why the regions themselves were different, not what the problem with the query was.
No one's going to seriously propose using Node or MS for HFT. Or even use JavaScript to make something as heavy as a video game streaming server. It would be too slow. Even if you did do something so quixotic as to build an HFT server in Node, no hedge fund or investment bank is going to switch to a Node-based HFT communication system. They'd just be agreeing to be noncompetitive.
I guess what I'm saying is that where it counts, I almost always recommend real-time libraries in Java or C++. Rust is not there yet. Hospitals can't risk a patient's life on it yet by having the PACS system, or, God forbid, the modalities themselves, dependent on the newest Rust release. Or even worse, Node. It's way too risky.
Most people on HN don't know or hear about work in these kinds of fields. So we think Node is a great and effective tool for solving server-side problems, because it solves most of the problems we see. It's not. It's a great, easy tool for solving server-side problems. And we'll all make far better CTOs in the future if we recognize the difference, and the revenue opportunity that exists, between easy and effective.
I have a hypothesis that 1/8 of fatherhood is treating the most interesting things about your life like a FOIA request is needed to get you to tell them.
My Dad may be almost 102, but let me assure you he is not confused. Also, Edward Teller was a good friend of my Dad and I remember him well.
My Dad worked with and knew many famous people like John von Neumann, Einstein, etc., so this anecdote was just another experience for him.
EDIT: I was saying that he was not confused about this story. I notice that my Dad is no longer as facile with computers. For years his hobby was writing screenplays, adding 3D animation and video segments of friends and family, and he had us all help with voices. He has mostly stopped doing that in the last year or two. I am not judging him at all: I am only in my 70s and I do much less technical work than I used to. My Dad has a saying: getting old sucks, but it beats the alternative.
Wifi would have to go to Steve Jobs :-) Lucent was sitting on 802.11 (WaveLAN) for ten years selling super expensive products targeting niche markets and it took Apple to move things forward. More in “Oral History of Arthur “Art” Astrin”, wifi pioneer: https://youtu.be/Tj5NNxVwNwQ
It definitely takes some setup to make it work, but once working, it is pretty reliable. It requires occasional merge conflict resolution and setting the right files to be ignored.
I love that it is just folder with markdown files.