It's very important for both of Go's compiler toolchains to keep working well, for redundancy and feature-design validation purposes. However, I'm genuinely curious -- do people or organizations use gcc-go for some use cases?
I assume it will follow in gcj's footsteps if no one steps up for maintenance.
GCC has a high bar for adding frontends to the standard distribution, and if there isn't a viable reason to keep them around, they eventually get removed.
What kept gcj around for so many years, after being almost left for dead, was that it was the only frontend project that had unit tests for specific compilation scenarios.
Eventually someone made the effort to migrate those tests and remove gcj.
It has its niche uses, such as compiling Go for lesser-used architectures. It's a bit awkward not to have full language capabilities, but it still feels nicer than writing C/C++.
> GCC Go does not support generics, so it's currently not very useful.
I don't think a single one of the Go programs I use (or have written) uses generics. If generics are the only sticking point, then that doesn't seem like much of a problem at all.
> You’re also at the mercy of the libraries you use, no?
To a certain extent. No one says you must use the (presumably newer) version of a library that uses generics, or even use libraries at all -- although for any non-trivial program, that's probably not how things will shake out for you.
> Which likely makes this an increasingly niche case?
This assumes that dependencies in general will on average converge on using generics. If your assertion is that this is the case, I'm going to have to object on the basis that there are a great many libraries out there today that were feature-complete before generics existed and are therefore effectively only receiving bug-fix updates, with no retrofit of generics in sight. And there is no rule that dictates that all new libraries _must_ use generics.
I just used them today to sort a list of browser releases by their publication date. They're not universal hammers, but sometimes you do encounter something nail-shaped that they're great at.
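A minimal sketch of that kind of use (type and field names invented), leaning on the generic slices.SortFunc from the standard library (Go 1.21+):

    package main

    import (
        "fmt"
        "slices"
        "time"
    )

    type Release struct {
        Browser   string
        Published time.Time
    }

    func main() {
        releases := []Release{
            {"Firefox 128", time.Date(2024, 7, 9, 0, 0, 0, 0, time.UTC)},
            {"Chrome 126", time.Date(2024, 6, 11, 0, 0, 0, 0, time.UTC)},
        }
        // slices.SortFunc is generic over the element type, so this works
        // for any struct without writing a sort.Interface boilerplate type.
        slices.SortFunc(releases, func(a, b Release) int {
            return a.Published.Compare(b.Published)
        })
        fmt.Println(releases)
    }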
The Nushell and Elvish scripting languages are similar in many ways. I personally find the "shell" experience better in Nushell than Elvish.
Nushell
- Bigger community and more contributors
- Bigger feature set than Elvish
- Built in Rust (Yay :-)! )
Elvish
- Mostly developed by one person
- Built in golang
- Amazing documentation and general attention to detail
- Fewer features than Nushell
- Feels more stable, polished, and complete than Nushell. A script written today is more likely to work unaltered in Elvish a year down the line. However, this is only an impression; Nushell may well have settled down since I last looked at it.
For "one off" scripts I prefer Elvish.
I would recommend both projects. They are excellent. Elvish feels less ambitious, which is precisely why I like it for writing scripts. It does fewer things, and I think it does them better.
Nushell feels like what a future scripting language and shell might be. It feels more futuristic than Elvish. But as mentioned earlier both languages have a lot of similarities.
I don't think it matters whether it's Rust or Go especially, for an end user tool. But it definitely matters if it's Rust/Go compared to something else like C or Python.
The language choice has certain implications and I would say Rust & Go have fairly similar implications: it's going to be pretty fast and robust, and it'll have a static binary that makes it easy to install. Implications for other languages:
C: probably going to have to compile this from source using some janky autotools bullshit. It'll be fast but segfault if you look at it funny.
Python: probably very slow and fragile, a nightmare to install (less bad since UV exists I guess), and there's a good chance it's My First Project and consequently not well designed.
Not even that matters to me: I will install from repos. It might make packagers' lives a bit more difficult in some cases but they are probably very familiar with that.
I have not really had problems with installing C (on the rare occasions I have compiled anything of any complexity) nor Python applications. Xonsh is supposed to be pretty good and written in Python, and most existing shells (bash, zsh, csh etc.) are written in C.
Amusing aside: I use fish, and until I decided to fact-check before adding it to the list of shells written in C, I did not realise it had been rewritten in Rust.
I don't use Elvish daily (I use fish) but writing scripts in Elvish is a great experience. The elvish executable can serve as an LSP server and that makes writing Elvish scripts a bit easier.
I don't care much for the Elvish shell experience, rather I like the Elvish scripting language. The documentation is top notch and the language evolves slowly and feels stable.
The shell prompt is also a small interface. How your shell responds to tab autocomplete, provides suggestions etc. can be quite helpful. Here I just like the way fish suggests filenames, provides an underline for filenames that exist and so on.
The language is what you write in an $EDITOR. Here Elvish scripts can be nice, succinct and powerful. I like how I don't have to worry about strange "bashisms" like argument quoting etc. Everything feels consistent.
> It just shows the mindset of its devs was a little behind the realities of the industry, or they simply didn't care about concurrency.
OCaml cared about concurrency (e.g. Lwt and Async are old libraries that provide concurrency -- they didn't need Multicore OCaml). What OCaml didn't care so much about until recently is true _parallelism_ across threads: before OCaml 5.0, parallelism was to be obtained via processes rather than threads. True parallelism in threads is available in OCaml >= 5.0.
Python is actually trying to go multicore too! OCaml, however, beat it to the punch. The strengths of Python lie elsewhere, though -- a topic for another day.
> Python is actually trying to go multicore too! OCaml, however, beat it to the punch.
This is debating the relative finishing places of the two runners finishing last in the marathon after most of the other runners have already gone home and had their dinner.
Would you rather have an HFT trade go correctly and a few nanoseconds slower, or a few nanoseconds faster but with some edge-case bugs related to variable initialisation?
You might claim that you can have both, but bugs are more likely in the uninitialised-by-default scenario. I doubt that variable initialisation is the thing that would slow down HFT; I would posit it is things like network latency that dominate.
> Would you rather have an HFT trade go correctly and a few nanoseconds slower, or a few nanoseconds faster but with some edge-case bugs related to variable initialisation?
As someone who works in the HFT space: it depends. How frequently and how bad are the bad-trade cases? Some slop happens. We make trade decisions with hardware _without even seeing an entire packet coming in on the network_. Mistakes/bad trades happen. Sometimes it results in trades that don't go our way or missed opportunities.
Just as important as "can we do better?" is "should we do better?". Queue priority at the exchange matters. Shaving nanoseconds is how you get a competitive edge.
> I would posit is it things like network latency that would dominate.
Everything matters. Everything is measured.
edit to add: I'm not saying we write software that either has or relies upon uninitialized values. I'm just saying that in such a hypothetical, it's not a cut-and-dried "do the right thing (correct according to the language spec)" decision.
> We make trade decisions with hardware _without even seeing an entire packet coming in on the network_
Wait what????
Can you please educate me on high-frequency trading...? I don't understand what the point of it is. Say one person has created an HFT bot -- why the need for other bots, beyond having different trading strategies? And I don't think these are profitable -- how do they compare in the long run with the Boglehead strategy?
This is a vast, _vast_ over-simplification: The primary "feature" of HFT is providing liquidity to market.
HFT firms are (almost) always willing to buy or sell at or near the current market price. HFT firms basically race each other for trade volume from "retail" traders (and sometimes each other). HFTs make money off the spread - the difference between the bid & offer - typically only a cent. You don't make a lot of money on any individual trade (and some trades are losers), but you make money on doing a lot of volume. If done properly, it doesn't matter which direction the market moves for an HFT, they'll make money either way as long as there's sufficient trading volume to be had.
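To put toy numbers on that (made up and purely illustrative -- real desks also model fees, adverse selection, and losing trades):

    package main

    import "fmt"

    func main() {
        // Hypothetical one-cent spread: buy at the bid, sell at the offer.
        bid, offer := 99.99, 100.00
        shares := 1_000_000.0 // volume matched on both sides over a day
        // Profit is roughly spread * volume, before fees and losers.
        fmt.Printf("$%.2f\n", (offer-bid)*shares) // ~$10000.00
    }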
But honestly, if you want to learn about HFT, best do some actual research on it - I'm not a great source as I'm just the guy that keeps the stuff up and running; I'm not too involved in the business side of things. There's a lot of negative press about HFTs, some positive.
TL;DR Your dad's PHP is no longer the current PHP. The language itself has been thoroughly modernized and made more consistent, less buggy, and more performant. There is nothing better than a piece of software that keeps getting improved decade after decade, born of real developer needs and experience.
- Type annotations are integrated and work well with PHP now. This results in a kind of scripty Java OOP that is more succinct and type-checked! (It is also possible to write PHP without types when you want metaprogramming features.)
- Many inconsistencies in the design of the language (OOP, function naming etc.) have been resolved/smoothed out
- The language recognises the inherent problems with `null` and allows you to write type annotations that rule out null. This is my smell test for a language, BTW -- golang for instance does not seem to have a good story for null (see the sketch at the end of this comment).
- PHP Language developers have been really good about increasing performance year after year
- Lots of software like web frameworks and CMSes continue to be based on PHP
And -- the greatest part of the design -- PHP remains the best example of the "shared nothing" architecture. An architecture that allows you to scale your application easily.
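To illustrate the point about Go and null from the list above -- a minimal sketch with invented names. PHP lets you declare `User $u` (null rejected at the call boundary) versus `?User $u` (null allowed); in Go every pointer is implicitly nullable and the compiler won't stop you:

    package main

    import "fmt"

    type User struct{ Name string }

    // Nothing in this signature can say "u must not be nil".
    func greet(u *User) string {
        return "hello, " + u.Name
    }

    func main() {
        fmt.Println(greet(&User{Name: "ada"}))
        fmt.Println(greet(nil)) // compiles fine, panics at runtime
    }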
Build something with Symfony or Laravel. In this space PHP is used in the modern way. Don't look at WordPress, Pimcore, WooCommerce et al. -- they're pretty much stuck with old PHP.
That said, you can still shoot yourself in the foot with PHP. The dark alleys are all still there.
This is a fair and valid way of looking at things, though the trick is not to get too heavy-handed. At what point does regulation become too stifling?
The American way possibly puts too few responsibilities on manufacturers. The European way seems to be to saddle them with too many regulations -- possibly killing a lot of innovation.
One way to approach this would be to put more responsibilities on large established companies and fewer on smaller companies. But then the problem is that larger companies will want to arbitrage this somehow by indirectly "owning" smaller companies that carry fewer environmental responsibilities.
This area is far more complex than we think it is.
Also, what do we do about totally new materials that are thought to be benign when introduced but are proved to have harmful effects many years later? Does the company that introduced them now face huge open-ended costs and go bankrupt?
The solution is, as always, in the middle ground: society as a whole bears some of the cleanup cost (a kind of insurance policy for all companies) and companies bear some of the costs.
Here in Norway, electrical and electronic (EE) goods are taxed extra and that money goes to recycling and cleanup[1].
Importers and producers are required to be a member of an approved company handling returns, like RENAS[2].
Shops selling EE goods are required to accept returned EE goods from individuals of the type they sell. So if you sell fridges you have to take my old fridge and handle it in accordance with the rules.
Seems to work better than nothing, though how well I don't know. As with all such regulations there's money to be made by skipping steps, and some do[3].
This is a good design. The company that manufactures the product does not necessarily need to be responsible for the cleanup: the cleanup is done by another company and the costs are added at customer checkout. But this is open to abuse, as you mentioned -- some companies may take shortcuts, or the cleanup companies may become an oligopoly and charge unreasonable prices that add a lot of cost to the products.
Also, what happens if you order a product online from a company in another country? Does Norway still get to add tax for cleanup on these imported goods? I would guess that this would be a powerful incentive for customers to skirt these regulations for lower prices.
> Does Norway still get to add tax for cleanup on these imported goods?
If you import as a private person AFAIK no. Consumers have very good consumer protection on goods bought from domestic shops, so there's a strong incentive to do that rather than import.
Though all that Temu junk is another story...
But companies importing EE goods have to report to the return company they're a member of, and pay them accordingly.
Can't recall offhand if there's a special "flag" on the import declaration or if they just go by HS code. And presumably they get audited on this.
IIRC it used to be more directly linked to the import declaration but they streamlined it.
> there's money to be made by skipping steps, and some do
You must be joking.
"Fifteen major car manufacturers have been fined almost €600 million by the European Commission and the British government after Mercedes-Benz blew the whistle on a cartel that fixed car recycling costs and processes." https://www.dw.com/en/eu-and-uk-fine-carmakers-millions-over...
> The American way possibly puts too few responsibilities on manufacturers. The European way seems to be to saddle them with too many regulations -- possibly killing a lot of innovation
Well, that's what the European way is, lol. Tax and regulate instead of focusing on the crux of the problem, which is overproduction and planned obsolescence. Any solution that uses taxes and extra charges will simply pass the costs on to the consumer.
I like the idea of putting the onus on companies to get rid of the product, but there should be a consumer onus too. Consumers should be discouraged from tossing everything into the landfill, and companies should be forced to collect the stuff they produce after its lifecycle is complete. This might even drive companies to revise their designs to use more recyclable materials.
A good way to penalize planned obsolescence would be to charge a penalty that decreases the longer the product stays in use. So if I return the fridge for recycling after a couple of years (bad fridge), the company automatically gets charged 5% of the fridge's cost; if I recycle it after 10 years, the company gets charged nothing (as an example).
Maybe instead of a charge this could be a credit: if the recycling happens after a long time, the company gets a bigger payback than if it happens sooner. The money is collected at checkout, so the company can't claim bankruptcy or low profits to avoid the payment.
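A toy sketch of that schedule, taking the 5%-at-2-years and zero-at-10-years numbers from above and, as one arbitrary choice, interpolating linearly in between:

    package main

    import "fmt"

    // penaltyRate: fraction of the purchase price the manufacturer is
    // charged when the product is recycled after the given number of
    // years in use. The 5% / 0% endpoints are the example above.
    func penaltyRate(years float64) float64 {
        switch {
        case years <= 2:
            return 0.05
        case years >= 10:
            return 0
        default:
            return 0.05 * (10 - years) / 8 // linear taper from 2 to 10 years
        }
    }

    func main() {
        for _, y := range []float64{2, 4, 6, 8, 10} {
            fmt.Printf("%2.0f years: %.1f%% of purchase price\n", y, 100*penaltyRate(y))
        }
    }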
> So if I return the fridge for recycling after a couple of years
Here in Norway consumers enjoy a 5 year warranty on products that are meant to last, and 2 years on other non-consumables.
So if my fridge dies due to a manufacturing flaw within 5 years, the store I purchased it from has to repair it free of charge, replace it with an equal or better product, or give a full refund. If the product keeps breaking in the same way, the customer can demand a full refund.
And it's up to the store to convincingly argue it's not a manufacturing flaw if they don't want to do that.
This provides similar disincentive to import crappy goods.
Well, there's the issue, innit? You're not placing fault on the manufacturer of the shoddy goods, but on the stores, which I presume are the local distributors?
Sure, you're disincentivizing crappy goods, but then you'd also bar a stratum of society that can only afford those crappy goods. While that's not much of a problem in Norway, I suppose, it is a problem in the majority of the world.
I don't think the Norway example is relevant. We are talking about a country whose oil production comes to more than a thousand dollars a month per person, including old people and babies. It is literally a country where EVERY person is a millionaire from oil money alone. So they can set the most failed policies and make them work.
But ryandrake's comment might be the solution to what Trump/Republicans/the Rust Belt want:
1. employment for Americans.
2. bringing back industrial capacity in US.
If large companies are forced to recycle/repair INSIDE the USA, that ultimately means employment for Americans and brings industrial capacity back to the US.
(which could mean forcing Chinese manufacturers to set up whole industrial complexes in the US...)
btw, this would be a much easier measure, with fewer side effects, than the "tariff on everyone" situation
You should, however, discuss politics with close friends -- they probably got close to you because you both share a worldview, or because they like hearing your worldview (even if it differs from theirs).
Closeness means more sharing. That always comes with risks and rewards.
It's very difficult, in my opinion, for broad-based record/replay software like rr to exist for macOS. macOS system interfaces are quite basic in terms of functionality compared to Linux's, and increasingly locked down.
rr uses many advanced features of Linux `ptrace`. Compare `man ptrace` on Linux with that on macOS, for example, and you will notice that Linux gives `ptrace` a lot of capabilities that macOS simply does not.
There are a large number of other features required for practical record and replay -- I don't think macOS provides those either.
It's probably possible to build _some_ record/replay system on macOS with constraints, restrictions, workarounds and compromises -- never say never as they say. But I don't think it can be as capable/generic as rr on Linux.
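For a feel of the gap: a minimal Linux-only sketch (my own, not from rr) that uses PTRACE_SYSCALL to stop a child at every syscall entry and exit -- a request macOS's ptrace has no equivalent of, and only a small fraction of what rr actually needs:

    package main

    import (
        "fmt"
        "os/exec"
        "runtime"
        "syscall"
    )

    func main() {
        runtime.LockOSThread() // ptrace requests must come from the tracing thread

        cmd := exec.Command("/bin/true")
        cmd.SysProcAttr = &syscall.SysProcAttr{Ptrace: true}
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        pid := cmd.Process.Pid

        var ws syscall.WaitStatus
        syscall.Wait4(pid, &ws, 0, nil) // child stops at exec

        for ws.Stopped() {
            var regs syscall.PtraceRegs
            if err := syscall.PtraceGetRegs(pid, &regs); err == nil {
                fmt.Println("syscall number:", regs.Orig_rax) // x86-64 field name
            }
            syscall.PtraceSyscall(pid, 0) // run to the next syscall boundary
            syscall.Wait4(pid, &ws, 0, nil)
        }
    }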
Instruments 16.3 includes a new Processor Trace instrument which uses hardware-supported, low-overhead CPU execution tracing to accurately reconstruct the execution of a program. This tool provides metrics like duration, number of cycles, and instructions retired for every function executed on the CPU. The timeline in Instruments presents an execution flame graph, while detail views provide aggregate-level data like Call Tree or aggregated metrics (min, max, count, sum), divided by function. Traces can be recorded using the new Processor Trace template on supported devices: M4 Mac, M4 iPad, and iPhone 16/16 Pro. Tracing on the device requires additional configuration in System Settings.