Try out Forkner shorthand. You can learn it gradually: the first steps are omitting vowels and simplifying some letters, and from there you progress to various abbreviations. Ultimately it's based on English cursive, so there's nothing too exotic to learn in terms of orthography, although I guess if you're younger there's a chance you never learned a cursive style! I'm a novice, but I feel it doesn't hurt readability that much, and it's quick to learn.
> Research has long shown that layoffs have a detrimental effect on individuals and on corporate performance. The short-term cost savings provided by a layoff are often overshadowed by bad publicity, loss of knowledge, weakened engagement, higher voluntary turnover, and lower innovation — all of which hurt profits in the long run. To make intelligent and humane staffing decisions in the current economic turmoil, leaders must understand what’s different about today’s larger social landscape. The authors also share strategies for a smarter approach to workforce change.
There are other benefits to Bayesian data analysis besides being able to handle limited data. The outputs of frequentist analysis have problems around the quantification of uncertainty. For instance, from simulation studies we know that the aleatoric coverage probability of confidence intervals at a selected confidence level varies depending on the size of the difference in plausibility between the null and alternative hypotheses. And a given confidence interval says nothing about the epistemic uncertainty for this particular experiment. This can make the outputs of frequentist analysis difficult for stakeholders to use, whereas Bayesian epistemic probabilities are generally more easily understood by stakeholders and can directly feed quantitative decision analysis methods.
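To make that contrast concrete, here is a minimal sketch on toy A/B-test numbers of my own (nothing from this thread): the frequentist output is an interval whose 95% guarantee is about the procedure over repeated samples, while the Bayesian output is a probability statement about this particular experiment that can be dropped straight into a decision rule. The Beta(1, 1) priors and the "lift > 2 points" threshold are assumptions for illustration.

```python
# Minimal sketch: same toy data, two kinds of output.
import numpy as np

conv_a, n_a = 180, 2000   # control: conversions, visitors (made up)
conv_b, n_b = 215, 2000   # variant

# Frequentist: 95% Wald confidence interval for the difference in rates.
p_a, p_b = conv_a / n_a, conv_b / n_b
se = np.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
ci = (p_b - p_a - 1.96 * se, p_b - p_a + 1.96 * se)

# Bayesian: Beta(1, 1) priors on each rate; Monte Carlo draws from the two
# posteriors give P(lift > 2 percentage points | data) directly.
rng = np.random.default_rng(0)
draws_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, 100_000)
draws_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, 100_000)
p_big_lift = np.mean(draws_b - draws_a > 0.02)

print(f"95% CI for lift: ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"P(lift > 0.02 | data) = {p_big_lift:.2f}")   # feeds a decision rule
```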
An interesting book on adapting frequentist methods to create confidence distributions that can better express uncertainty and can optionally incorporate prior information using likelihood functions is this: https://www.cambridge.org/core/books/confidence-likelihood-p...
I’ve found that in many applications, the difference between a frequentist analysis and a Bayesian one is unlikely to make a difference in the decision making (even with UQ). I’m sure there are fields where such statistical rigor is called for (where the data quality is so high that the variation is in the analysis — often the case with machine data).
For everything else there’s so much error. Being able to quantify uncertainty is great — it’s a signal that we need to collect more and better data. But so often we have to move ahead with uncertain data.
Interestingly, in business, taking action (even if wrong) produces outcomes that are much better signals to learn from than having statistically rigorous analyses, so many times there’s a bias for action rather than obsession over analysis.
But of course in some fields being wrong is costly (like clinical trials) so I can see UQ being more useful and prominent there.
In many of my projects I have had to incorporate the knowledge/intuition of domain experts: a new product launch (by us or a competitor), some unseen change in the operating environment. These are events of the 'history does not repeat but it rhymes' variety.
Although there may be no data collected from the time something similar happened before in history, experts can reason through the situation to guesstimate the direction and magnitude of the effect in qualitative terms.
Bayesian formulations are very handy in such situations.
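To illustrate (with entirely made-up numbers and a hypothetical "competitor launch" scenario), here is a minimal sketch of how such a guesstimate can enter as a prior: the expert's qualitative range becomes a normal prior on the effect, and a conjugate normal-normal update combines it with a handful of noisy observations.

```python
# Minimal sketch: expert guesstimate as a prior, updated with sparse data.
import numpy as np
from scipy import stats

# Expert: "a competitor launch like this usually dents demand by 5-15%"
prior_mean, prior_sd = -0.10, 0.025          # roughly spans -5% to -15%

# Only three weeks of noisy observations since the launch (made up)
obs = np.array([-0.04, -0.12, -0.07])        # weekly demand change
obs_sd = 0.05                                # assumed known observation noise

# Conjugate normal-normal update for the true effect
prior_prec = 1 / prior_sd**2
data_prec = len(obs) / obs_sd**2
post_var = 1 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * prior_mean + data_prec * obs.mean())
post_sd = np.sqrt(post_var)

print(f"posterior effect: {post_mean:.3f} +/- {post_sd:.3f}")
print(f"P(effect worse than -10%) = {stats.norm.cdf(-0.10, post_mean, post_sd):.2f}")
```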
>I’ve found that in many applications, the difference between a frequentist analysis and a Bayesian one is unlikely to make a difference in the decision making (even with UQ).
In that case you may find the following interesting:
"Lindley's paradox is a counterintuitive situation in statistics in which the Bayesian and frequentist approaches to a hypothesis testing problem give different results for certain choices of the prior distribution."
How likely is Lindley's paradox to show up in practice? Well, there is Bayes for that (tongue firmly in cheek).
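For anyone curious what the paradox looks like numerically, here is a small sketch using the classic birth-ratio example from the literature (textbook numbers, not mine). The two-sided test rejects θ = 0.5 at the 5% level, while the posterior probability of that same null, with a uniform prior on θ under the alternative, comes out around 0.95.

```python
# Lindley's paradox on the classic birth-ratio example.
import numpy as np
from scipy import stats, special

n, x = 98451, 49581          # births, boys (textbook numbers)
theta0 = 0.5                 # H0: theta = 0.5

# Frequentist: two-sided test of H0 via the normal approximation
z = (x - n * theta0) / np.sqrt(n * theta0 * (1 - theta0))
p_value = 2 * stats.norm.sf(abs(z))              # ~0.02, reject at 5%

# Bayesian: P(H0 | x) with P(H0) = P(H1) = 0.5 and theta ~ Uniform(0, 1)
# under H1. The binomial coefficient cancels from both marginal likelihoods.
log_m0 = x * np.log(theta0) + (n - x) * np.log(1 - theta0)
log_m1 = special.betaln(x + 1, n - x + 1)
bf01 = np.exp(log_m0 - log_m1)                   # Bayes factor in favour of H0
post_h0 = bf01 / (1 + bf01)                      # ~0.95, favours H0

print(f"p-value = {p_value:.4f}, P(H0 | data) = {post_h0:.3f}")
```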
Unless it’s a stacked analysis where the results of one study depend on another, usually experts just eyeball the frequentist results and take a judgment call — that’s been my experience (not generalizing, but I think in business people like doing things that are simple and easy to understand).
I think it’s definitely possible that Bayesian and frequentist approaches give different conclusions, but in practice that doesn’t alter the final decision. Analyses guide decision making, but in the end decisions are made on consensus, narrative, and intuition. Statistics is only the handmaiden rather than the arbiter.
> usually experts just eyeball the frequentist results and take a judgment call
Indeed, but that does not make it right or rational. Bayesian methods help keep things rational. This is pertinent because human brains are terrible at conditional probabilities.
One can always argue that data analysis is usually just window dressing and decision making is mostly political and social. Empirically you would be mostly right if you take that position. One cannot argue against that factual observation.
The more interesting question is, if the decision makers aspire to be rational, which method should they use. I have used frequentist and Bayesian methods both. I made the choice on the basis of the question that needed answering.
For example, when we needed to monitor (and alert on) a time-varying probability of error (under time-varying sample sizes) -- a Bayesian method was a more natural fit than, say, confidence intervals or hypothesis tests. Bayesian methods directly address the question "What is the probability that the error probability is below the threshold now, considering the domain expert's opinion about how often it goes below the threshold and how the data has looked in the recent past?"
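For what it's worth, a stripped-down sketch of that kind of monitor might look like the following: a Beta-Binomial model where the expert's opinion enters as the prior pseudo-counts and older batches are geometrically discounted. The prior, decay factor, alert rule, and batch data are all invented for illustration.

```python
# Minimal sketch of a Bayesian error-rate monitor with forgetting.
from scipy import stats

threshold = 0.01            # we care about P(error rate < 1%)
prior_a, prior_b = 1, 99    # expert prior: errors are rare (~1% on average)
decay = 0.9                 # discount accumulated evidence each batch

a, b = prior_a, prior_b
batches = [(3, 500), (1, 200), (12, 800), (0, 150)]   # (errors, requests)
for errors, total in batches:
    # Shrink past evidence back toward the prior, then add the new batch
    a = decay * (a - prior_a) + prior_a + errors
    b = decay * (b - prior_b) + prior_b + (total - errors)
    p_below = stats.beta.cdf(threshold, a, b)
    flag = "ALERT" if p_below < 0.5 else "ok"
    print(f"P(error rate < {threshold:.0%}) = {p_below:.2f} -> {flag}")
```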
> Indeed, but that does not make it right or rational. Bayesian methods help keep things rational. This is pertinent because human brains are terrible at conditional probabilities.
I agree with you on Bayesian methods keeping things rational (consistent within a probabilistic framework).
I would say there are different kinds of rationality, however. The two I'm most interested in are epistemic rationality (not being wrong) and instrumental rationality (doing what works), and in the domain of business (though perhaps not in other domains like science and math) we optimize for the latter. This is because not getting analyses wrong (epistemic) is actually less useful than getting workable results (instrumental), even if the analyses are wrong. In fact, some folks at lesswrong tried their hand at doing a startup, applying all the principles of epistemic rationality and avoiding bias, and it did not work out. Business is less about having the right mental model and more about doing what works. This article expands on the point [1]
The issue is that in ill-defined domains like business (not just stochastic in a parametric-uncertainty sense, but actually ill-defined), the map (statistical models) is not the territory (the real world) -- it's a very rough proxy for it. Even the expert opinions that Bayesian methods embed as priors -- many of those are subjective priors which are not 100% rational. Not to be clichéd, but to recycle an old John Tukey saying: "An approximate answer to the right question is worth a great deal more than a precise answer to the wrong question." Frequentist methods are often good enough for discovering the terrain approximately, and in business there's more value in discovering terrain than in getting the analysis exactly right.
(That said, in these settings Bayesian methods are just as good, though their marginal value over frequentist methods is often not appreciable. One exception might be multilevel regression analysis where you're stacking models.)
> Even the expert opinions that Bayesian methods embed as priors -- many of those are subjective priors which are not 100% rational.
Of course! Like the Big Bang, it needs one initial allowable 'miracle' and does not let irrationality creep in through other back doors.
As I mentioned earlier, I choose the formulation that suits the question that needs answering.
Not sure about the 'what works' vs 'analytic correctness' distinction. How would one even know that something works, or have a hunch about what may succeed, without a mental model to base it on? Often that model is implicit and not sharp enough to be quantitative. A Bayesian formulation helps make some of those implicit assumptions explicit.
Other than that I think we mostly agree. For example, both formulations assume a completely defined sample space, the universe of all possible outcomes. That works in a game of gambling; in business you often do not know this set.
Anyhow, nice talking to you. I enjoyed the conversation.
I remember that there was a company that would somehow print out the 3DS photos in a way that they could be seen in 3D (probably with limited viewing angles). I'm sure there's some photography term here that I'm totally unaware of. I'd love to find a way to do this.
Ok, here's a better article from CMU's SEI. See "Binary Analysis Without Source Code".
> In general, the layout used by the Rust compiler depends on other factors in memory, so even having two different structs with the exact same size fields does not guarantee that the two will use the same memory layout in the final executable. This could cause difficulty for automated tools that make assumptions about layout and sizes in memory based on the constraints imposed by C. To work around these differences and allow interoperability with C via a foreign function interface, Rust does allow a compiler macro, #[repr(C)] to be placed before a struct to tell the compiler to use the typical C layout. While this is useful, it means that any given program might mix and match representations for memory layout, causing further analysis difficulty. Rust also supports a few other types of layouts including a packed representation that ignores alignment.
> We can see some effects of the above discussion in simple binary-code analysis tools, including the Ghidra software reverse engineering tool suite... Loading the resulting executable into Ghidra 10.2 results in Ghidra incorrectly identifying it as gcc-produced code (instead of rustc, which is based on LLVM). Running Ghidra’s standard analysis and decompilation routine takes an uncharacteristically long time for such a small program, and reports errors in p-code analysis, indicating some error in representing the program in Ghidra’s intermediate representation. The built-in C decompiler then incorrectly attempts to decompile the p-code to a function with about a dozen local variables and proceeds to execute a wide range of pointer arithmetic and bit-level operations, all for this function which returns a reference to a string. Strings themselves are often easy to locate in a C-compiled program; Ghidra includes a string search feature, and even POSIX utilities, such as strings, can dump a list of strings from executables. However, in this case, both Ghidra and strings dump both of the "Hello, World" strings in this program as one long run-on string that runs into error message text.
You cite yet another article which you clearly don't understand, and whose authors have questionable understanding themselves.
This article cites CVEs of a certain type, which were especially popular in the 2021 timeframe. These CVEs do not correspond to real vulnerabilities in real executables. Rather, they report instances of Rust programs violating the strictest possible interpretation of the rules of the Rust language. For comparison, quite literally every single C program ever written would have to receive a CVE if C were judged by the same rules, because it isn't possible to write a C program which conforms to the standard as strictly as these Rust CVEs were requiring. CVEs of this nature are a bit of a meme in the Rust community now, and no one takes them seriously as vulnerabilities. They report ordinary, non-vulnerability bugs and should have been filed on issue trackers.
The whole discussion about layout order is completely irrelevant. When RE'ing unknown code you don't know the corresponding source anyway, so the one-to-many correspondence of source to layout doesn't matter: you are given the layout. You can always write a repr(C) struct that corresponds to it if you're trying to produce reversed source. This is no different from not knowing the original names of variables and having to supply your own.
The next objection is literally that Rust does not use null-terminated strings, except the authors are so far out of their depth that they don't even identify this obvious root cause of their tools failing. Again, this has absolutely nothing to do with the reversibility of Rust programs, except perhaps preventing some apparent script kiddies from figuring it out.
The authors do manage to struggle to shore, as it were, by the end of the article, and somehow end up correctly identifying their tools and their own understanding, not Rust, as the root cause of their troubles. I take it you didn't make it that far when you read it?
With the mention of memristors, this sounds kind of similar to what Wolfram recently said about hardware for neural networks that can simultaneously be memory and compute:
> But even within the framework of existing neural nets there’s currently a crucial limitation: neural net training as it’s now done is fundamentally sequential, with the effects of each batch of examples being propagated back to update the weights. And indeed with current computer hardware—even taking into account GPUs—most of a neural net is “idle” most of the time during training, with just one part at a time being updated. And in a sense this is because our current computers tend to have memory that is separate from their CPUs (or GPUs). But in brains it’s presumably different—with every “memory element” (i.e. neuron) also being a potentially active computational element. And if we could set up our future computer hardware this way it might become possible to do training much more efficiently.
Quick intro: https://imgur.com/a/zYyON