No, it can't, because we check the bash commands the AI tries to execute against a list of patterns for dangerous commands. Also, all commands are executed within a folder specified in the configuration file, so you can choose which files it has access to. However, we currently have no containerization, meaning that code execution, unlike bash, could be harmful. I am thinking about improving safety by running all code/commands inside a Docker container and then having some kind of file transfer upon user validation once a task is done.
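For anyone curious, here is a minimal sketch of what that kind of check might look like; the deny-list patterns and function names are hypothetical, not the project's actual implementation:

```python
import re
from pathlib import Path

# Hypothetical deny-list of dangerous bash patterns; the real list will differ.
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\b",          # recursive force delete
    r"\bmkfs(\.\w+)?\b",      # formatting a filesystem
    r"\bdd\s+if=",            # raw disk writes
    r">\s*/dev/sd[a-z]\b",    # redirecting output onto a block device
    r"\bchmod\s+-R\s+777\b",  # blanket permission changes
]

def is_dangerous(command: str) -> bool:
    """Return True if the bash command matches any deny-list pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)

def is_inside_workdir(workdir: str, target: str) -> bool:
    """Check that a path stays inside the configured working folder."""
    return Path(target).resolve().is_relative_to(Path(workdir).resolve())

# Reject before ever handing the command to a shell.
assert is_dangerous("rm -rf /")
assert not is_dangerous("ls -la ./workspace")
assert not is_inside_workdir("/home/me/project", "/etc/passwd")
```

A pattern list like this is really only a first line of defense; the Docker container plus user-validated file transfer you describe would cover the cases a regex can't catch.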
I have not used this one yet, but as a rule of thumb I always test this type of software in a VM. For example, I have an Ubuntu Linux desktop running in VirtualBox on my Mac to install and test stuff like this, which is set up to be isolated and much less likely to have access to my primary macOS environment.
There are different types of nova.
This type happens when one of a close binary pair is a compact stellar remnant:
it has already spent its fuel and is a *dense*, inert, planet-sized mass of neutrons.
The other star in the binary pair is "not dead yet" but getting there.
Late stage stars swell up as the core loses its grip on the outer layers.
In this case the binary pair is close enough that the outer shell the red giant is losing
can be captured by the white dwarf. It is snowing star dust!
This material landing on the white dwarf is the ordinary (light-ish) matter
found in the outer layers of stars.
The ball of neutrons that the star-snow is falling on does not care.
When the star-snow lands it settles into a (nigh) perfect sphere.
So no mountains, moguls, nor molehills on neutron stars; everything will be perfectly smooth.
This keeps up until the star-snow is coating the entire planet sized ball of neutrons
to a depth of maybe six feet at which point the pressure of its own weight
causes it to spontaneously undergo fusion.
Rapid fusion of the top layer of mass over an entire planet-sized ball goes brrrr.
And again, the ball of neutrons does not care that its entire surface just became
an atomic explosion.
This process gets used because we can calculate a lot of things.
We can bound how big the ball of neutrons could be:
any smaller and it would not form, any bigger and it would be a black hole.
We can bound how much matter could pile up before it has to go boom.
We can calculate what an atomic explosion with X amount of matter should look like.
So these explosions become signposts, first because we can see them in other galaxies
and then compare what is seen/measured with predictions, which gives things like
how far away it would have to be to seem X bright,
or what must be in the way for this part of the spectrum to be wonky.
This particular one is predictable because it is close enough that we have a pretty good estimate of how fast the star-snow is accumulating,
and how deep it can get before it pops off, again.
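To make the "how far away it would have to be to seem X bright" step concrete, the standard-candle comparison boils down to the distance modulus (a textbook relation, not specific to this event):

m - M = 5 * log10(d / 10 pc), so d = 10^((m - M + 5) / 5) parsecs,

where m is the apparent (observed) brightness in magnitudes and M is the absolute brightness predicted from the physics of the explosion.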
To add to this, White Dwarfs consist of electron degenerate matter, not pure neutrons. They have not yet reached the point where the electrons and protons fuse to become neutrons. Essentially the atoms have been compressed to the point where the only thing keeping the star from collapsing further is a quirk of quantum mechanics and the Pauli exclusion principle.
In addition, when the White Dwarf accumulates too much mass (1.4 Solar Masses), it will cause a Type Ia supernova where the entire star will explode leaving behind nothing.
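For anyone who wants the formula behind that 1.4 figure: it is the Chandrasekhar limit, roughly

M_Ch ≈ 1.44 * (2 / μ_e)^2 solar masses,

where μ_e is the mean molecular weight per electron (μ_e ≈ 2 for a carbon-oxygen white dwarf, which gives the familiar ~1.4 solar masses).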
I liked your comment! It was well thought out and typed. I just wanted to add a couple details I thought were really neat. The idea that stars can be destroyed at all blew my mind when I first learned about it. I always love to talk about it!
White Dwarfs are probably the only non-mainline stars in the Universe we know a lot about. Neutron Stars and Black Holes are a lot more of a mystery because of our trouble reconciling Gravity and the Standard Model. This makes White Dwarfs a lot more predictable than their more massive cousins. So we can actually simulate a lot of what they do.
However, in this case, we can observe the star slowly getting dimmer, a pattern that has occurred in the past about a year before the star was about to do something. The process only takes about 100 years, with a possible indicator a year in advance of the event.
I never understand this argument. We have lots of space on this planet for humans; it's the industrialization that's harming the planet, not more people.
40% of all bioavailable nitrogen produced on this planet is produced industrially. Most human beings exist only due to industrialization, they would not exist if industrialization wasn't a reality. Feeding people and having people cannot be separated. You can't have an earth with this many people without industrialization, short of razing all forest land and farming on it, which seems to me to be much more destructive than industrialization.
Industrialization is done in support of human populations. As populations get bigger more and more space must be industrialized.
Your existence (eating, drinking, living) requires X acres of farmland, Y acres of ocean, Z acres of landfill, N tonnes/watts of energy, etc etc etc. No matter how "ethically" you eat or how much housing density you can tolerate, the resources must be spent or you will die.
I suppose it is true that we could all stand shoulder to shoulder connected to IV drips like some kind of sci-fi nightmare (https://en.wikipedia.org/wiki/The_Mark_of_Gideon) but I view this as a bad outcome.
This is pure speculation, but I wonder if endocrine disrupters present in plastics are leading to a decline in testosterone and therefore decline in fertility (I also wonder how those disrupters affect women)
How is your explanation simple, and where is the data to back it up? This theory isn't crackpot: global sperm counts are declining, and the average testosterone of men is lower than it was in your grandfather's time. Certain plastics have been shown to be endocrine disrupters.
Your theory might work in westernized countries, but this trend is global according to the article. Certain countries enforce more traditional values for men and women, and those countries are also seeing a decline.
I'm a novice, but from the research I've done, the evidence is far from conclusive. There is growing evidence of the negative impact of microplastics on human health in a number of ways, but it is a massive leap to claim it is the primary cause for the phenomena you're describing.
In fact, the biggest contribution to declining birth rates is people choosing to have fewer children, not men being incapable of having children. And there are plenty of great sociological explanations for that: changing gender roles, economic mobility, access to birth control, etc.
Edit: As somebody else said, it's a birthrate crisis, not a fertility crisis. "Fertility" is a loaded and inaccurate framing.
PPS: Even "crisis" is loaded. It just leads to people channeling their existing personal insecurities into large-scale social phenomena.
Projections and forecasts for entire sectors use this data to make decisions. For example, China built so many homes based on population projections / census estimates; their source data turned out to be incorrect, which is why you see a situation like Evergrande.
Incorrect. As common sense would dictate, lots of people who work at Apple live close to the office (homes aren't the only option; there is a thing called renting).
A well-known, very large, successful trillion-dollar company emphasizes this heavily; it's called identifying the Cause By. Regressions are categorized as caused before a build is submitted, in the current build, or in previous builds. It's extremely important to identify the cause-by (commit) in order to either revert it or fix it.
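For concreteness, one generic way to pin down a cause-by commit (not necessarily how that company does it internally) is an automated bisect between a known-good and a known-bad build; a rough sketch, with a hypothetical repro script:

```python
import subprocess

def git(*args: str) -> subprocess.CompletedProcess:
    """Run a git subcommand and capture its output."""
    return subprocess.run(["git", *args], capture_output=True, text=True, check=False)

def find_cause_by(good_ref: str, bad_ref: str, test_cmd: list[str]) -> str:
    """Bisect between a known-good and known-bad ref to find the causing commit.

    test_cmd must exit 0 when the regression is absent and non-zero when present.
    """
    git("bisect", "start")
    git("bisect", "bad", bad_ref)
    git("bisect", "good", good_ref)
    result = git("bisect", "run", *test_cmd)  # git prints "<sha> is the first bad commit"
    git("bisect", "reset")
    return result.stdout

# Hypothetical usage: repro_regression.sh reproduces the reported regression.
# print(find_cause_by("v1.2.0", "HEAD", ["./repro_regression.sh"]))
```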
Sounds interesting. Could you elaborate? I don't quite understand it. You test for regression errors before committing? But why commit with known regression errors?
As a previous comment said, this sounds like a generalization of regression testing. This article seems to focus on the small scale and omits a couple points relevant to larger scale systems:
- data dependencies cause regressions, so you need some way to factor this in
- production changes cause regressions
- debugging at the source code level can be laborious past a certain point
The way I approached doing this at another (maybe the same) T$ company was to build tooling that could take anything observable -- e.g. logs, program traces, signals from services, service definitions, data, stack traces from debuggers for side-by-side running -- convert it to a normal form (e.g. protobuf or JSON), and then diff that to look for regressions.
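A minimal sketch of that normalize-then-diff idea (the field names and volatile-key list here are made up for illustration; the real tooling handled far more formats):

```python
import json
from difflib import unified_diff
from typing import Any, Iterable

# Fields that differ run-to-run without indicating a regression.
VOLATILE_KEYS = {"timestamp", "hostname", "request_id", "duration_ms"}

def normalize(record: dict[str, Any]) -> str:
    """Project an observable (log line, trace span, RPC signal) onto a canonical JSON form."""
    stable = {k: v for k, v in record.items() if k not in VOLATILE_KEYS}
    return json.dumps(stable, sort_keys=True)

def diff_runs(baseline: Iterable[dict], candidate: Iterable[dict]) -> list[str]:
    """Diff two runs' normalized observables; any output is a potential regression."""
    a = [normalize(r) for r in baseline]
    b = [normalize(r) for r in candidate]
    return list(unified_diff(a, b, fromfile="baseline", tofile="candidate", lineterm=""))

# Usage: side-by-side runs of the old and new builds emit record streams to compare.
old_run = [{"rpc": "GetUser", "status": "OK", "timestamp": 1}]
new_run = [{"rpc": "GetUser", "status": "DEADLINE_EXCEEDED", "timestamp": 2}]
print("\n".join(diff_runs(old_run, new_run)))
```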