As a scientist, I always find it interesting how audiophiles prefer "real" (non-ideal) electronics over digital simulations. It is not only the transfer functions of tubes, which can be simulated perfectly in the digital domain given sufficiently high cutoff frequencies. It is also the haptic handling of cables, the fact that you can physically touch the system at some point and it does something unexpected. In the end, digital modeling only goes so far. Unfortunately, neither skeuomorphic design nor AR/VR has come far enough to include all the effects of physical systems. That is definitely within reach, though, with multi-physics real-time simulations at incredible spatial and temporal resolution...
Something which bothers me about coroutines is the sheer boilerplate -- I don't even mean the ridiculously verbose coroutine definitions in C++20, but just the spread of "await" and "async" keywords all over the code, for instance in Python and Rust. I understand that these keywords are the compromise that lets the feature enter the language without making you pay for it when you don't use it, but I wonder if there isn't a more concise way. Something not mentioned at all in the article is the "what color is your function" problem (https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...), and I wonder whether there is a good solution to it.
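To make the "coloring" concrete, here is a minimal Python sketch (the function names are made up, nothing from the article): only the innermost function does any I/O, yet every caller up the chain has to become async and sprinkle await just to reach it.

```python
import asyncio

# Hypothetical example: only fetch() actually performs I/O, yet the
# async "color" propagates to every caller above it.

async def fetch(url: str) -> str:
    await asyncio.sleep(0.1)        # stand-in for a real network call
    return f"response from {url}"

async def parse(url: str) -> str:   # does no I/O itself...
    body = await fetch(url)         # ...but must be async because fetch() is
    return body.upper()

async def main() -> None:
    print(await parse("https://example.com"))

asyncio.run(main())
```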
Those aren't actual symmetric coroutines, though. Try the raw greenlet Python module: a coroutine can yield to an arbitrary coroutine, not just to the event loop.
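For anyone who hasn't seen it, a minimal sketch of symmetric switching with greenlet (assuming the greenlet package is installed; the two coroutines are just toy examples): each greenlet hands control directly to whichever greenlet it chooses via switch(), with no event loop in the middle.

```python
from greenlet import greenlet

# Two coroutines that pass control straight to each other via switch(),
# without any scheduler or event loop mediating the transfer.

def ping():
    for i in range(3):
        print(f"ping {i}")
        pong_gr.switch()   # transfer control directly to pong

def pong():
    for i in range(3):
        print(f"pong {i}")
        ping_gr.switch()   # ...and directly back to ping

ping_gr = greenlet(ping)
pong_gr = greenlet(pong)
ping_gr.switch()           # kick off the ping/pong exchange
```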
This looks simple not only because the article is well written but also because Go is the go-to language for complex networking situations. Doing things in parallel, even pipelining? That would turn into quite a spaghetti algorithm in C/C++, and even the async Rust/Python world would not look as clean as Go does here. This is clearly a big strength of the language.
> even the async Rust/Python world would not look as clean as Go does here.
That’s quite debatable, and my experience is different. There is a whole lot of high-level stuff that can be expressed with, e.g., async streams and functional transformation chains in Rust that Go has no answer to. The same goes for being able to use any future in select/join, not just channels. I also find cleanup / error handling in Rust much cleaner.
Agreed for C and old-style C++, but going off the flowchart [1] in the article, this could be done quite cleanly with boost::asio and C++20 coroutines as well.
Unfortunately, the BEAM VM is very slow compared to many other languages, including Go. It's great when starting out with a few developers; however, since Go and Rust are much more performant, the server cost savings can pay for several more Go or Rust developers instead of being tied to Elixir. There is always a trade-off between ease of use and high performance.
But, yeah, if I were going to bootstrap a startup at the seed level, Elixir is the best choice for the backend. If I'm spending $500K+ a year on infrastructure, I'll be looking at Go and Rust.
> Unfortunately, the BEAM VM is very slow compared to many other languages, including Go.
For CPU-bound tasks? Sure. But we are talking in the context of networking. Elixir is going to absolutely smoke Go for applications requiring a lot of simultaneous connections. We can already see it in Action Cable vs Phoenix Channels: Channels supports an order of magnitude more simultaneous websocket connections per machine.
PS: libraries like Rustler exist. You can take the CPU-intensive stuff and offload it to a module written in Rust when you really need to squeeze some performance out.
Elixir lacks the fine-grained control over I/O that Go provides. Moreover, a BitTorrent client requires CPU-intensive hash verification, encryption/decryption, and data compression/decompression, which will greatly slow down Elixir. You can build a connector to Rust, or, instead of going through slow abstraction layers, you can build the whole thing in Rust if you have the skills.
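To make the hash-verification point concrete: in BitTorrent v1, every downloaded piece has to be SHA-1 hashed and checked against the digests stored in the torrent's info["pieces"] field, so the client runs a tight CPU-bound loop over all the data it receives. A rough sketch of that check (in Python purely for illustration, not a claim about any particular client):

```python
import hashlib

# Illustrative only: BitTorrent v1 stores one 20-byte SHA-1 digest per piece,
# concatenated in the torrent's info["pieces"] field. Every downloaded piece
# must be hashed and compared before it is accepted -- this is the CPU-bound
# hot loop being discussed.

def verify_piece(piece_data: bytes, piece_index: int, pieces_field: bytes) -> bool:
    expected = pieces_field[piece_index * 20:(piece_index + 1) * 20]
    return hashlib.sha1(piece_data).digest() == expected

# Hypothetical usage with made-up data: a "torrent" consisting of one piece.
pieces_field = hashlib.sha1(b"A" * 262144).digest()
print(verify_piece(b"A" * 262144, 0, pieces_field))   # True
```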
Rather than telling me, you are going to have to build a BitTorrent client in Elixir and show me. Otherwise, after my years of experience working with an Elixir team, I don't believe you.
As a decade-long Windows and Linux desktop user, the Mac OS X dock was one thing I really envied. There are tons of bad copies of it for various Linux desktops (KDE, GNOME, Xfce, you name it), and none of them get it right. The software quality of the dock and window manager was something where OS X was, for a long time, years ahead of other GUIs. My feeling is that the default settings have made the dock less exciting in recent years.
Sorry, but the Mac DE sucks compared to modern ones like KDE. Why does my left-aligned dock change width every time I open a new program? Now every window I maximize has a slightly different width depending on how many programs were open when I maximized it. Why can't I disable this feature? Why can't I get the dock to fill the height of the screen?
Why are programs still "open" when they have no open windows? Why am I able to cmd-tab to applications with no open windows and have nothing happen?
This is a macOS thing: an app is not just its windows, it is also a menu. The windows can all be closed and the menu is still there. Apps that don't behave like that feel like they are not proper Mac apps. Speaking as a seasoned Mac user: Cmd+Q is your friend.
>The windows can all be closed and the menu is still there.
I'm a few macOS versions behind what's current, so it's possible things have changed, but historically in OS X/macOS your description applied to apps that could have multiple windows. Any app that could only have one window—for example, System Preferences—would quit when the one window was closed.
Any app with a menu generally works as I described. That hails back to the System 7 days, and NeXTSTEP was the same from what I remember (though I mostly used OpenStep).
System Preferences (especially the iOS-ified version) breaks a lot of rules. The "classic" version is really similar to the System Preferences in NeXT/OpenStep: it was a single-document interface, and it used a back arrow to go back a level. The new one has more persistence, IIRC, with the list down one side.
I am sceptical and think this sounds a lot like bullshit.
Quantum entanglement as an abstract theory of how the brain works was already being discussed in the 1980s. It was never an experimentally testable theory and therefore not really accepted in psychology.
(Elitist reference: I work in QC, my partner in consciousness research.)
I perceived having "old" (co-)founders as a subtle but inherent disadvantage in the prototypical deep-tech startup scene. When you talk to VC youngsters who have barely started their university studies, it is awkward at best and unsuccessful on average. Things change dramatically when you talk to partners at the VC firms, where the older founder and the partner meet at a similar age. This may sound trivial, but these non-technical human interactions dominate quite a bit of my life, unfortunately. Age is as important as degrees or previous experience as a founder.
Aye, ageism can creep up on us all if we assume age means disadvantage despite knowing that experience is a massive advantage. Many older founders have both age and degrees, and many even teach those who get the degrees. Older folks typically know the market much better, having lived it, and are better at garnering trust if they aren't a complete muppet.
I have been watching this project for a few years and they are making good progress. For whoever is interested in open-source computer algebra systems, there are of course plenty of more mature options: classical ones such as GNU Octave or Maxima, but also "modern" ones such as SageMath, Symbolics.jl, or SymPy. In particular, there is a broad range from symbolic libraries such as GiNaC up to "batteries included" environments like SageMath. The community is lively and amazing; for instance, SageMath basically pioneered the web notebook interface, which eventually brought us Jupyter in all its flavors.
I personally love the LISPy style of Mathematica (MMA), but of course it is not (only) the core that makes MMA so powerful; it is the huge library, which offers not only industry-leading solutions for basic topics such as symbolic integration, 2D/3D graphics, or finite element methods, but also a plethora of special-purpose domains such as bioinformatics. I guess Mathics is a good clone of the core but lacks, of course, all the libraries. It is, by the way, the same logic as with Matlab and its many "toolboxes" compared to the NumPy clone. However, the Python movement brought many novel codes into the NumPy world that no longer run on Matlab.
You're right about the progress they've made: to me, this project's a really wonderful example of quietly chipping away at a project you love. I remember about... five years ago?... when the project was first launched, and I thought "hmm, that's nice - they've really nailed the symbolic evaluation engine, but let's see what happens". I'll have to remember their example every time I feel like starting a new project instead of chipping away at an old one...
Hmm, I was using it fairly heavily last year, for maybe a month, while going through a book that required loads of messing with graphs, and I don't remember it acting up... Perhaps I got lucky? Is it known to be buggy, in general?
Yes, it's terribly buggy: incorrect answers sometimes, depending on the release version, and solve gets stuck all the time. I tend to use Xcas/Giac instead where I can. It's also bundled on my calculator (HP Prime).
It's so bad that the university I'm "loosely associated" with maintains its own patched version to fix a load of problems with it.
Indeed. The problem is clearly that there is no regression suite. Sometimes things get broken and then have to be fixed again. You end up in a situation where you need to tell a person to use a specific version and hope they know how to get it working.
This makes communication, which is a really big part of mathematics, extremely difficult.
BTW, the current SBCL versions are much better than the stable/oldstable releases such as the ones for Debian or Ubuntu LTS.
Try FriCAS, you might like it too.
On Maxima, beware: your package manager might ship you a Maxima version built with a generic, unoptimized CL compiler.
GNU CLISP's performance compared to SBCL is abysmal. On an N270 netbook, McCLIM widgets built from Ultralisp (a Quicklisp repo) run in real time with SBCL. ECL is a bit better, closer to SBCL than to CLISP. CLISP fails to compile it due to the lack of threading support on 32-bit. And even if it could run McCLIM, you would see the widgets visibly redrawing, like Win32 ones on a Pentium back in the day.
Thanks for the suggestion - will look into it tomorrow.
Not performance-heavy here. What I do require is something that is less buggy, though. Maxima has little to no test suite, so there are regularly stupid regressions that break everything.
> I love how I get downvoted for every negative comment about this rather than an actual rebuttal of the issues.
It's because you haven't mentioned any issues. Just said it's terrible and buggy. Nobody can debug that for you based on that description. If they haven't experienced the same "issues", the best they can say is "nuh uh".
Maybe I'm mistaken, but I don't think of Octave, Matlab, or NumPy as operating in the same space as a computer algebra system -- to me, those are all numerics-oriented languages/libraries, used to obtain numerical solutions to problems rather than exact symbolic expressions. They complement each other and are often used together, and Mathematica and Mathics seem to support both paradigms, but they aren't the same thing.
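A tiny sketch of that split, using SymPy for the symbolic side and SciPy/NumPy for the numeric side (the integral is an arbitrary example, not something from the thread): the CAS returns an exact closed form, while the numerics library returns a floating-point approximation.

```python
import numpy as np
import sympy as sp
from scipy import integrate

x = sp.symbols("x")

# Symbolic (CAS) side: an exact closed-form result.
exact = sp.integrate(sp.exp(-x**2), (x, 0, sp.oo))
print(exact)                      # sqrt(pi)/2

# Numeric side: a floating-point approximation plus an error estimate.
approx, err = integrate.quad(lambda t: np.exp(-t**2), 0, np.inf)
print(approx)                     # 0.8862269254...
print(float(exact))               # same value, obtained symbolically first
```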
When you go into battle to solve a computational mathematics problem, the problem sometimes doesn't care about these boundaries. E.g., you might think that you're solving a problem in "computer algebra", but a sufficiently fast solution might end up involving numerical linear algebra in surprising ways. (E.g., enumerating elliptic curves over the rational numbers reduces to exact linear algebra because of Wiles's work on Fermat's last theorem, and often exact linear algebra can be done via clever tricks much more efficiently using floating-point matrices, which happen to be insanely fast to work with thanks to GPUs...) It's thus valuable, at least for research, to have large mathematical software systems that span these boundaries.
You're correct. The engines are actually quite orthogonal (symbolic vs numerical). There's the rare cross-over, but for the most part they are used to obtain different types of solution.
Mathematica can do both, but it's much stronger in symbolic than numerics. I wouldn't try to implement large-scale numerical models in Mathematica.
It isn't that simple. Yes, Mathematica has emphasized symbolic computation since day one, but it has also had numerical capabilities like NIntegrate since day one. There's no good reason why the symbolic capabilities of Mathematica should necessarily hurt its numerical performance. In particular, Mathematica and Matlab often use the exact same libraries for many numerical operations, e.g. Intel's MKL for many matrix operations or SuiteSparse for sparse matrices. Any differences in how efficiently Matlab and Mathematica call out to those libraries usually come down to a few tweaks.
In the vast majority (90% or more) of cases, the performance of Matlab and Mathematica is identical or within the margin of error. And where performance does differ substantially, Matlab isn't always coming out on top. So the question becomes whether or not your work depends on those particular use cases. In my eyes, the Wolfram Language is fundamentally superior to Matlab. And even if Mathematica/WL lags behind Matlab in some numerical aspects, Wolfram Research will have a far easier time fixing or improving that deficiency than MathWorks will have improving the poverty of symbolic computation in Matlab.
Of course, the factors in practical use cases are quite a bit more complex than what I've said above. If you're working at an engineering firm that's already knee-deep in Simulink and various Matlab Toolboxes, then the choice has already been made for you. And sometimes it's not a question of performance but one of feature parity, e.g. Compile in the Wolfram Language vs the Matlab Compiler/Runtime. Also, the complexity of the Wolfram Language is not a benefit for users looking to simply write code and get a result ASAP.
If I recall correctly, the symbolic toolbox data structures were not first-class citizens in the MATLAB world (where the fundamental data structure was the matrix).
I used the Symbolic Toolbox in grad school to try to simplify equation systems, but once I got the result, I had to rewrite it in real MATLAB code.