That is a nice method to compute determinants of matrices.
I wish they taught this method when I was in high school.
As John D Cook points out, the fact that the core 2x2 matmul operations can be done in parallel is a big benefit for doing this in software or hardware.
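For concreteness, here is a minimal Python sketch of Dodgson condensation, assuming that is the 2x2-based method under discussion (the thread doesn't spell it out). The point is that every 2x2 minor within a condensation step is independent of the others, which is what makes each step easy to vectorize or parallelize.

```python
import numpy as np

def dodgson_det(a):
    """Determinant via Dodgson condensation (a sketch).

    Assumes no interior entry of an intermediate matrix is zero;
    a production version would swap rows/columns or fall back to
    another method when that happens."""
    m = np.array(a, dtype=float)
    prev = np.ones((m.shape[0] - 1, m.shape[0] - 1))
    while m.shape[0] > 1:
        # All of these 2x2 determinants are independent of each other,
        # so this step is embarrassingly parallel (vectorized here).
        cross = m[:-1, :-1] * m[1:, 1:] - m[:-1, 1:] * m[1:, :-1]
        nxt = cross / prev
        prev = m[1:-1, 1:-1]  # interior of the current matrix
        m = nxt
    return m[0, 0]

print(dodgson_det([[2, 1, 3], [1, 4, 2], [3, 1, 5]]))  # 4.0, matches np.linalg.det
```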
What if there is no KYC involved? Based on my knowledge, if the transactions are 100% crypto and they don't interact with fiat, then it's considered merely a web3 project.
A word of warning: you can always add enough what-ifs to change someone’s opinion eventually, but that’s still not going to bypass their original complaint for everyone else.
Fiat is going to enter the picture somewhere, like when you want to cash out the crypto you’ve been paid so you can in turn pay your employees and server bills. At that point you might start getting asked questions about where you got funding that’s been traced back to illegal activity.
The general guidance is that you segment your application domain into two categories - Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP).
The OLAP data is populated from the OLTP data using queries (snapshot tables, materialized views, etc. could be the implementation).
You then add/refresh data into the OLAP tables at a set frequency (e.g., daily, weekly, bi-weekly, monthly, quarterly, yearly).
The OLTP system has up-to-date, real-time transactions. The OLAP data has snapshots as of a particular date. The OLAP data may be denormalized while the OLTP data is highly normalized. This makes the OLAP data optimized for reads while the OLTP data is optimized for writes.
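As a toy illustration of that split, here is a minimal Python/sqlite3 sketch; the table and column names (orders, daily_sales_snapshot, etc.) are invented for the example, and a real system would typically use a materialized view or a scheduled ETL job instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- OLTP side: normalized, optimized for writes
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         amount REAL, created_at TEXT);
    -- OLAP side: denormalized snapshot, optimized for reads
    CREATE TABLE daily_sales_snapshot (snapshot_date TEXT, order_date TEXT,
                                       total_amount REAL, order_count INTEGER);
""")

def refresh_daily_sales_snapshot(conn, snapshot_date):
    """Rebuild the OLAP snapshot from the OLTP tables at a set frequency."""
    with conn:  # single transaction
        conn.execute("DELETE FROM daily_sales_snapshot WHERE snapshot_date = ?",
                     (snapshot_date,))
        conn.execute("""
            INSERT INTO daily_sales_snapshot
            SELECT ?, date(created_at), SUM(amount), COUNT(*)
            FROM orders
            GROUP BY date(created_at)
        """, (snapshot_date,))

# The OLTP side keeps taking writes; the snapshot is refreshed on a schedule.
conn.execute("INSERT INTO orders (customer_id, amount, created_at) "
             "VALUES (1, 9.99, '2024-01-05')")
refresh_daily_sales_snapshot(conn, "2024-01-06")
print(conn.execute("SELECT * FROM daily_sales_snapshot").fetchall())
```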
> Biggar also contends that British rule in India, initially under the auspices of the East India Company (EIC) from the 1750s and direct colonial rule after 1857, was far from the rapacious affair that Whigs at the time (Burke springs to mind) or later historians, like Theodore Dalrymple assert.
That is a tall claim, if there ever was one.
> EIC officials like Ernest ‘Oriental’ Jones and Warren Hastings showed a profound interest in Hindu culture and went to great lengths to accommodate Indian custom to utilitarian understandings of law and property. Biggar suggests that, Edward Said, the author of the 1978 book Orientalism which spawned post-colonial discourse theory and decolonise campaigns in education, distorted the character of European and British interest in both India and China.
Warren Hastings presided over the Great Bengal Famine of 1770 and reported back to the EIC that some 10 million people in Bengal had perished.
Damodaran, Vinita (2014), "The East India Company, Famine and Ecological Conditions in Eighteenth-Century Bengal", in V. Damodaran; A. Winterbottom; A. Lester (eds.), The East India Company and the Natural World, Palgrave Macmillan UK, pp. 80–101, 89, ISBN 978-1-137-42727-4, writes:
> Before the end of May 1770, one third of the population was calculated to have disappeared, in June the deaths were returned as six out of sixteen of the whole population, and it was estimated that 'one half of the cultivators and payers of revenue will perish with hunger'. During the rains (July–October) the depopulation became so evident that the government wrote to the court of directors in alarm about the number of 'industrious peasants and manufacturers destroyed by the famine'. It was not till cultivation commenced for the following year 1771 that the practical consequences began to be felt. It was then discovered that the remnant of the population would not suffice to till the land. The areas affected by the famine continued to fall and were put out of tillage. Warren Hastings' account, written in 1772, also stated the loss as one third of the inhabitants and this figure has often been cited by subsequent historians. The failure of a single crop, following a year of scarcity, had wiped out an estimated 10 million human beings according to some accounts. The monsoon was on time in the next few years but the economy of Bengal had been drastically transformed, as the records of the next thirty years attest."
After JS build pipelines and bundlers, most element names and classes are just minified garbage.
Sadly most web devs don't give a damn about accessibility anymore :(
Especially not in the React- and Angular-based ecosystems and toolchains. Server-side rendering was popular for a while, but even then the generated HTML was pretty useless for a11y-focused products.
Yes. The name/id recommendation has been evolving since the original HTML spec. For modern browsers (in 2023), you are right about the id attribute. Some very old browsers needed named anchors.
The book covers it in Appendix A.6 (p 424) in the v2023.06.11a PDF file.
> A.6 What is the Difference Between “Concurrent” and “Parallel”?
> From a classic computing perspective, “concurrent” and “parallel” are clearly synonyms. However, this has not stopped many people from drawing distinctions between the two, and it turns out that these distinctions can be understood from a couple of different perspectives.
> The first perspective treats “parallel” as an abbreviation for “data parallel”, and treats “concurrent” as pretty much everything else. From this perspective, in parallel computing, each partition of the overall problem can proceed completely independently, with no communication with other partitions. In this case, little or no coordination among partitions is required. In contrast, concurrent computing might well have tight interdependencies, in the form of contended locks, transactions, or other synchronization mechanisms.
> This of course begs the question of why such a distinction matters, which brings us to the second perspective, that of the underlying scheduler. Schedulers come in a wide range of complexities and capabilities, and as a rough rule of thumb, the more tightly and irregularly a set of parallel processes communicate, the higher the level of sophistication required from the scheduler. As such, parallel computing’s avoidance of interdependencies means that parallel-computing programs run well on the least-capable schedulers. In fact, a pure parallel-computing program can run successfully after being arbitrarily subdivided and interleaved onto a uniprocessor. In contrast, concurrent-computing programs might well require extreme subtlety on the part of the scheduler.
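A minimal Python sketch of that distinction, with made-up workloads: the data-parallel part hands out independent chunks with no coordination at all, while the concurrent part has threads contending on a shared lock, so its progress depends on how the scheduler interleaves them.

```python
from concurrent.futures import ProcessPoolExecutor
import threading

# Data-parallel: each chunk is independent, no coordination needed,
# so even a very simple scheduler can run the chunks in any order.
def square_chunk(chunk):
    return [x * x for x in chunk]

def data_parallel(numbers, workers=4):
    chunks = [numbers[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return [y for part in pool.map(square_chunk, chunks) for y in part]

# Concurrent: workers share state and must synchronize on a lock,
# so progress depends on how the scheduler interleaves them.
counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:  # contended synchronization point
            counter += 1

def concurrent_counter(workers=4, times=10_000):
    threads = [threading.Thread(target=increment, args=(times,)) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

if __name__ == "__main__":
    print(sum(data_parallel(list(range(100)))))  # 328350
    print(concurrent_counter())                  # 40000
```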
Well, funnily enough this reads in contrast to the definitions used on Wikipedia, which are the ones I am also familiar with (I also teach a class called "Parallel Programming" to graduate students).
I do think the differentiation makes sense from the perspective of problem classes, as is also evident from the comments here. Running independent problems in parallel to better utilize hardware resources is very different from running problems in parallel in timesteps that have strong dependencies with regard to the progress of the overall computation. And that's not a problem of the scheduler, but a much more general concept.
It doesn't sound to me like the author has in mind the kind of web-service parallelism/concurrency that is very apparent in the comments here.
The definition found on Wikipedia, like many contentious subjects in programming, was written by people with a strong political agenda and very little respect for the matter being described.
This applies to all sorts of ambiguous terms used very generously in the witchcraft of "applied computer science". Other examples include "object-oriented programming", "statically- or dynamically-typed language", "interpreted language", "dependency inversion", a bunch of "software patterns" and more. All this terminology is meaningless because there's never a way to tell if a language is object-oriented or not, if it's statically-typed or not and so on. Parallel vs concurrent is just one of those things where emotional attachment won over any attempt at rational thinking.
Uhh, I think it is pretty commonly accepted that a statically-typed language has typechecking facilities before runtime and a dynamically-typed language doesn't. Maybe there is a spectrum, with things like gradual typing somewhere in the middle, but the general idea is quite clear.
1. That's not part of the language, it's part of an implementation. I.e. it means that the same language can be both statically-typed and not (according to your "definition"). Which is literally nonsense (in the sense true = false).
2. That something is commonly accepted doesn't make it any more true. In the context of programming, a lot of commonly accepted beliefs are nonsense; this one isn't an exception.
It's not about a spectrum. It's about the inability of a lot of people to critically assess information coming from otherwise reputable sources.
What are you talking about? Languages are pretty explicitly designed to be statically or dynamically typed: e.g. take Python, which relies a lot on having a dynamic typing discipline. Yes, you have Mypy and pytype, but those are pretty much different dialects.
Also, what reputable sources are you even talking about? That also reads as an unwarranted personal attack.
I think there is a point here if you ignore the grumpy old man delivery.
For example, even Python, which is dynamically typed, is also strongly typed, so you now have to know not just the distinction between statically typed and dynamically typed but also between strong and weak. Then stray slightly from the vanilla Python interpreter/runtime to any of the other flavors and you have to reason about compile time, runtime, interpretation time, bytecode, interop with the JVM, etc. So there is a point that the language is a way of expressing something and the implementation is where the devilish details lie. You can argue Jython isn't Python or whatever, and that's all well and good, but you can't really discuss all of this with other reasonable humans without getting into the details of implementation. Sure, for a leetcode level of understanding it doesn't matter much, but try to do something sufficiently complicated like building an OS extension in Python that interops with your C++-based API and you'll have to think about the implementation of the projections, and then port it to ARM and you'll have to think about it all over again.
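A tiny Python example of that layering (the function is made up): a static checker such as mypy rejects the bad call before the program runs, while CPython itself only complains once the call is actually executed, and because Python is strongly typed the complaint is a TypeError rather than a silent coercion.

```python
def total_price(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

# Static view: mypy flags total_price("3", 2.5) before execution
#   (incompatible argument type), without running anything.
# Dynamic view: CPython ignores the annotations and only fails when
#   the multiplication is actually attempted.
# Strong typing: the failure is a TypeError, not a silent conversion.
try:
    print(total_price("3", 2.5))
except TypeError as exc:
    print(f"runtime type error: {exc}")

print(total_price(3, 2.5))  # fine under both views
```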
> though the driver must remain behind the wheel to take over when prompted
That is a modal interface, and the system may also decide to switch modes at will. Anytime a human is expected to wake up from a mode and take over from an automated system on short notice, we have failure modes that are unique compared to a modeless system (including full human driving).
This is probably going to be even less successful than the Apple Watch in terms of adoption. $3.5k for a personal device -- perhaps it will capture a niche. It isn't going to be genre-defining like the iPod or iPhone.
The principal–agent problem refers to the conflict of interests and priorities that arises when one person or entity takes actions on behalf of another person or entity.
The same issues occur with self-driving cars, where the driver is expected to take over from the automation at any time (e.g., the driver wants to stop but the AI wants to go, or vice versa).
> We believe that the main reason for this incident is the proprietary nature of iOS. This operating system is a “black box”, in which spyware like Triangulation can hide for years. Detecting and analyzing such threats is made all the more difficult by Apple’s monopoly of research tools – making it a perfect haven for spyware. In other words, as I’ve often said, users are given the illusion of security associated with the complete opacity of the system. What actually happens in iOS is unknown to cybersecurity experts, and the absence of news about attacks in no way indicates their being impossible – as we’ve just seen.
Shatters Apple's argument that all of these hurdles are better for security. I wonder if testimony like this could affect any of their antitrust lawsuits or right to repair lobbying.
Not "shatters", as while it is a valid counter, it doesn't tell you the relative strengths and weaknesses of the two approaches, only that Apple isn't perfect which should already have been assumed.
A stronger counter to Apple's argument is the relative pricing of exploits… but the story I'm remembering is old enough that I don't want to just assume it's still true, even though it's near the top of my search results:
It doesn't really shatter anything does it? People here are going to understand that there are trade-offs to every decision made.
I suspect iOS is not worse than the more open Android simply because senior management at Kaspersky are using iPhones. If anybody is choosing their platform with security in mind, it has to be them and they are going with iOS.
And on that same page it says the Android version didn’t even require an exploit. The sneakiest thing that was required on Android was to write the word “Samsung” on the app icon so that users would click it.
Near the end, they say:
> This campaign is a good reminder that attackers do not always use exploits to achieve the permissions they need.
> It's absurd to say a company should not blow the whistle on a sophisticated attack when that company's job is just that!
They should definitely do it.
They should also acknowledge that they did a shoddy job. They let the malware run unchecked for several years. It is clear that the safeguards they had in place did not work, not just for protection but especially for detection.
Instead, they chose to boost the image of their own products and bash a third-party vendor with questionable reasoning.
> Shatters Apple's argument that all of these hurdles are better for security.
Sorry, I don't buy that this "shatters" anything besides people's misguided assumptions that anything can be perfectly secure without being fully disconnected.
Apple's iOS 16 supports the iPhone 8, which was released in 2017, 5 years ago.
Apple's iOS 15 supported the iPhone 6s, which was released in 2015, 7 years ago.
> Samsung’s previous promise to provide three years of upgrades and ensures millions of Galaxy users have access to the latest features for security, productivity, visual experience and more, for as long as they own their device.
> Samsung will now provide up to five years of security updates to help protect select Galaxy devices
They do mention 5 years of updates but only for _select_ galaxy devices (presumably the top of the line).
---
I am assuming anyone rooting/flashing is taking way more risks and security concerns into their own hands. But in length of support/security updates alone, Apple is winning.
I also wonder how long it actually takes a vulnerability patch (let's say for a zero-day) to get out on Android and then through OEM security updates. (I haven't been on Android in too long to know this.) Apple actually just released a way for them to do this and has already used it once; they call it "Rapid Security Responses" (which you can switch off, although idk why you would).
Because they were deceived by Apple's quality promises?
If Apple really wanted to improve security (instead of just producing marketing claims about it) they would provide anyone with debugging symbols, root privileges and anything else needed for research and debugging.
It's entirely rational to have believed iPhone to be more secure in the past, now believe Android is more secure, and yet remain on iPhone:
1. At some point, weigh probabilities of exploits
2. Update Bayesian priors as new evidence arrives
3. Even if the initial decision currently appears incorrect, there needs to be a high enough difference in probability to justify switching, because in switching, you're still exposed to any persistent exploitation via the old exploits plus new exploits on the new platform
Switching back and forth the instant your Bayesian prior swings over/under 50% for Android being more secure than iPhone is a terrible strategy. (Also, you need to risk-weight your various exploit probabilities... security is a multidimensional quantity, so collapsing to a scalar is at least context-/threat-model-dependent.)
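Roughly what that strategy looks like in code, with entirely made-up numbers just to show the shape of the argument: one likelihood-based Bayesian update, plus a switching margin that accounts for the cost and fresh risk of moving platforms.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """One Bayesian update: P(H | evidence) from P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# H = "Android is currently more secure for my threat model than iPhone".
# The numbers below are invented purely to illustrate the reasoning.
belief = 0.30                          # prior before the Triangulation report
belief = posterior(belief, 0.8, 0.4)   # update on the new iOS exploit report

SWITCHING_MARGIN = 0.25  # switching has costs and its own new-platform risks,
                         # so require a clear margin, not just belief > 0.5

def should_switch(belief, margin=SWITCHING_MARGIN):
    return belief > 0.5 + margin

print(f"updated belief: {belief:.2f}, switch: {should_switch(belief)}")
```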
So, they discover a vulnerability in iOS and publish the details of the symptoms of the exploit -- something that Apple themselves were unaware of -- release a tool to detect indicators of compromise in iPhone backups, and yet somehow they have poor judgment?
What should they be doing? Keep the discovery to themselves so those who claim iPhone is secure can continue living obliviously with their worldview unchanged? Wouldn't we accuse them of poor judgment if they did that?
It is quite reasonable for them to say the ecosystem being closed is making analysis and detection difficult. It is up to Apple to do what they want with that information.
If I'm understanding the GP correctly, they're asserting that any "real expert" would have anticipated being exploited on iPhone and would never have used iPhone.
I can see this point of view, but I feel expertise is more about skill in acquiring information and updating beliefs. In my view, real experts can be blatantly wrong, even about foundational facts, if they have an exceptional ability to update those beliefs.
No expertise is needed to say any OS/device is likely to suffer an attack/exploit. Anyone who says that for any platform will be right with a probability of 1.0.
The issue is that their claim that the cause of the exploit is the proprietary OS is both implausible (because otherwise Android would be far more secure than iOS) and inconsistent with their alleged expertise.
It’s entirely possible that they are experts, but are making a claim that is not based on their expertise, for reasons of political and marketing expediency.
They knew all along it was closed source, but that doesn't mean they believed all along (or at least were confident enough in their belief) that closed source resulted in higher risk of extant exploitable flaws.
Sure, I think a lot of people would think about it this way - but that just means they don’t have any real expertise.
Kaspersky says:
“We believe that the main reason for this incident is the proprietary nature of iOS.”
If the proprietary nature is the main reason for the incident, then Android should have been overwhelmingly more secure all along, and they should know this.
If they are only just figuring this out now, then they have been ludicrously ignorant for people who claim to be experts.
Occam’s razor says they really aren’t as expert as their marketing claims and they are trying to save face by blaming Apple.
Given that the Kremlin is blaming Apple and the NSA, perhaps Kaspersky is trying to deflect blame for not having warned Russian diplomats about the issue.
I feel this is likely going to devolve into a semantic argument over the true definition of real expertise. A key sticking point will likely be volume of a priori knowledge vs. skill in acquiring and synthesizing knowledge.
The issue is their claim that the cause was the proprietary nature of iOS.
This is inconsistent with their claims of expertise.
That’s the issue. I believe the claim isn’t being made because they are experts or because it is true, but rather to deflect blame for marketing and political reasons.
I guess everyone at Kaspersky knew the risk of an attack was non-zero given their industry profile. Their SIEM finally caught it, although it is arguable whether the detection was timely, and, as others in the thread have pointed out, their MDM should have detected the upgrade failures or version issues. We will probably hear about it in the detailed paper/presentation later.
Their rant on the closed nature of the iOS ecosystem is more about Apple's hold on the research tools. That is what I took from the statement, among other things.
Actually, Apple should consider making iMessage open source.
Given it is such a popular attack vector, it would probably benefit the iOS ecosystem to get the benefit of open-source scrutiny. There are other messaging apps like Signal, WhatsApp, Telegram, etc., so it is not as if a copycat would suddenly emerge and threaten Apple's position. Apple holds the keys to the App Store anyway and can review any potential copycat (a supposedly malicious one) and prevent it from being released.
Right, you can turn off getting any messages entirely and deregister your phone from their network. I believe what I was remembering is that you can't swap out the primary SMS-receiving app like you can on Android. Unless something changed. Not everyone likes to live in a security bubble without phone access, even the security-minded.
There is a switch in the Settings app to disable iMessage and just use SMS. This is an option for the built in messaging app, no need to “swap” or install another app.
So basically still using iMessage software just for SMS? I guess this could provide some better sense of security given the parsers are the main issue.
That one of the bigger security companies seemingly didn't have MDM screaming bloody murder or outright blocking authentication for an endpoint this out of date is more than a little concerning.
Props to their SIEM for detecting it in the end, but this seems like it could've been detected and remediated a few weeks in (assuming it didn't also have the ability to spoof the iOS version).
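The kind of check being described is simple in principle; here's a hypothetical sketch (the version floor, check-in window, and blocking policy are all invented for illustration, and it assumes the implant isn't spoofing the reported iOS version):

```python
from datetime import date, timedelta

MIN_IOS_VERSION = (16, 5)          # hypothetical compliance floor
MAX_DAYS_SINCE_CHECKIN = 14        # hypothetical staleness window

def is_compliant(os_version: tuple, last_checkin: date) -> bool:
    """Flag endpoints running an outdated OS or silent for too long."""
    up_to_date = os_version >= MIN_IOS_VERSION
    recently_seen = date.today() - last_checkin <= timedelta(days=MAX_DAYS_SINCE_CHECKIN)
    return up_to_date and recently_seen

# A device stuck on iOS 15.7 for months would fail this check and could be
# blocked from authenticating until it is investigated.
print(is_compliant((15, 7), date.today() - timedelta(days=90)))  # False
```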
> We identified that the latest version of iOS that was targeted by Triangulation is 15.7. However, given the sophistication of the cyberespionage campaign and the complexity of analysis of iOS platform, we can’t guarantee that other versions of iOS are not affected.
Indeed, the identified fix involves a factory reset and upgrading iOS to prevent the malware from taking over again.
That provides a simple explanation for why the phones are running such an old version: because they've been infected and unable to be updated for that entire time.
I guess execs at security firms are no better than average people when it comes to noticing that their phones never got the various new features (and emojis!) from the last year of OS updates.
> We have developed and made freely available the triangle_check utility, that can detect indicators of compromise in an Apple device backup. Detailed instructions on how to use it under different OSs (Windows, Linux and macOS), as well as how to create a device backup can be found in a post on Securelist. [1]