Sometimes I wonder: what if there is a theory that was on the right track but was (falsely?) falsified and has already been forgotten? Sure, a theory could be incomplete or incorrect in some ways, but would the right part be noticed? For example, I think it's all too easy to imagine a world where relativity or quantum theory would have been socially falsified and/or left without any attention.
A simple example is an experience I had at the beginning of my physics studies (which I never finished), when I discussed wheel friction with an older/smarter student. I was explaining that I had figured out that wheel spin actually matters when there is also side slip. [The total slip direction depends on the spin speed.] But because he -knew- that wheel spin does not matter, and he -knew- that he was better/smarter/etc., he was so focused on correcting my mistake that I was unable to convince him. How often does this happen at higher stakes?
So if the situation is that there has not been much progress for a long time, I think it could be valuable to also understand these failed theories and, very importantly, why they were falsified.
When I am working on a hard problem, I usually go in this order:
1. Describe the problem.
2. Describe a bunch of naive solutions.
3. Describe the problems in those naive solutions.
4. "Describe the problems in those problems": why some of those problems do not hold water. They can be worked around or fixed, or they are not actually a problem in this case, or maybe some combination of the naive solutions' properties gives a working solution.
For some reason I cannot reply to your comment, wizzwizz4.
We are talking about dynamic friction in its simplest form. You can treat it as a simple math problem too. Let's consider two extreme cases:
A: Side slip is 1 m/s and wheel spin is zero or very small.
B: Side slip is 1 m/s and wheel spin is extremely big, let's say 1000 m/s.
I think we can agree that friction is always opposite to surface speed. If wheel spin is on the x axis and side slip on the y axis:
In case A, friction is (0, 1).normalized() * friction-coefficient => (0, friction-coefficient)
In case B, friction is (1000, 1).normalized() * friction-coefficient => [approximately] (friction-coefficient, 0)
In the classroom the teacher says that slip does not matter. What the teacher actually means is that slip does not affect the -magnitude- of the friction, but this gets lost because the problem is presented in a 1D context. Though even in 1D slip still matters a little, because there is a difference between a slip of 1 m/s and -1 m/s.
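The two cases above can be sketched numerically. A minimal Python sketch (the `friction_direction` helper and the unit friction coefficient are my own choices for illustration; the minus signs make "opposite to the surface speed" explicit):

```python
import math

def friction_direction(spin_vx, slip_vy, mu=1.0):
    """Kinetic friction vector opposing the contact-patch (surface) velocity.

    spin_vx: contact-patch speed along x from wheel spin (m/s)
    slip_vy: side-slip speed along y (m/s)
    mu:      friction coefficient (scales the magnitude only)
    """
    speed = math.hypot(spin_vx, slip_vy)
    if speed == 0:
        return (0.0, 0.0)  # no sliding, no kinetic friction
    # Direction is opposite to the surface velocity; magnitude is mu
    # (per unit normal force) regardless of how fast the surface slides.
    return (-mu * spin_vx / speed, -mu * slip_vy / speed)

# Case A: slip 1 m/s, spin ~0 -> friction acts almost entirely sideways.
print(friction_direction(0.0, 1.0))      # (-0.0, -1.0)
# Case B: slip 1 m/s, spin 1000 m/s -> friction acts almost entirely along x.
print(friction_direction(1000.0, 1.0))   # approx (-1.0, -0.001)
```

The magnitude is the same in both cases; only the direction of the friction vector changes with spin, which is the whole point of the argument.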
> I think we can agree that friction is always opposite to surface speed.
This isn't intuitively obvious to me. One explanation says "must be true", another explanation says "might be false". I'd want to run an experiment with a toy car on a polished surface. Unfortunately, I'm quite a way from the nearest place I could set up such an experiment.
In other words, friction slows movement down and does not treat some directions on the surface as more preferable than others. Assuming a regular surface, this is pretty much the definition of friction.
I am not sure how well I have explained this, but if you are able to experimentally disprove it, it's worth a paper.
My theory is that physics went down a parallel path that leads to a dead end. The fork was too far back and nobody is willing to backtrack enough. A part of this is that almost all of modern physics takes mathematical shortcuts of dubious validity because “modern” physics was developed in the era of pencil and paper.
With the computer algebra systems and numerical methods newly available to us, a lot of old assumptions ought to be revisited.
Also some theories were ignored for political or even religious reasons. Or as you said, they couldn’t fix some basic issue at the time and just shelved the theory.
Some random examples:
The Many Worlds Interpretation is one of the least “popular” but the only sane and consistent theory of Quantum Mechanics.
One of Einstein’s last collaborations was Kaluza Klein theory which has many excellent features such as smoothly integrating EM and gravitational effects. The maths was too hard at the time so it languished.
Multiple time dimensions (a variant of MWI above) were all completely ignored because one paper “disproved” their feasibility. I read that paper and it only disproved a specific subset of theory space.
Did you run the experiment? I don't think wheel spin does matter when there's side slip. It matters when there would otherwise be static friction (e.g. if you're in a car with an ABS system), but I don't think it matters when it's just kinetic friction. (Of course, there are other kinds of friction, which might behave differently. I'm no friction expert. I imagine things get weird when water's involved, though.)
At the time I had lost my joy of coding too, but I was able to find it again.
One key point was to ignore learning new tech unless it was absolutely necessary, and to focus on just creating new things. I think it all started with Sebastian Lague's video, which reminded me how beautiful coding can be.
Excel does not support any fixed delimiter natively, since the delimiter is region dependent.
I ended up saving my mental health by supporting two different formats: "RFC csv" and "Excel csv". In Excel you can, for example, use a sep=# hint at the beginning of the file to make the delimiter work consistently. The sep annotation obviously breaks parsing for every other csv parser, but that's why there is the other format.
There might also be other reasons to mess with the file to get it to open correctly in Excel, like date formats, or adding a BOM to get it recognized as UTF-8, etc. (Not quite sure whether the BOM was an Excel thing or some other software we used to work with.)
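As a sketch of how a parser can cope with both formats, here is a small Python helper (the function name is mine; it uses the stdlib csv module and treats an Excel-style `sep=` first line as a delimiter hint):

```python
import csv
import io

def read_csv_with_sep_hint(text, default_delimiter=","):
    """Parse CSV text, honoring an optional Excel-style 'sep=<char>' first line.

    If the first line is a sep= hint, use that delimiter and skip the line;
    otherwise treat the first line as data and use the default delimiter.
    """
    stream = io.StringIO(text)
    first = stream.readline()
    if first.startswith("sep="):
        delimiter = first.rstrip("\r\n")[4:5] or default_delimiter
    else:
        delimiter = default_delimiter
        stream.seek(0)  # no hint: the first line is real data
    return list(csv.reader(stream, delimiter=delimiter))

rows = read_csv_with_sep_hint("sep=#\na#b#c\n1#2#3\n")
# rows == [['a', 'b', 'c'], ['1', '2', '3']]
```

A plain "RFC csv" file without the hint goes through the default path untouched, which is why keeping the two formats separate works.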
I also use sep= annotation.
That is not documented ANYWHERE by Microsoft
I assume one of the devs mentioned this in a mailing-list sometime in the nineties and it has found its way around.
Still... Shame on Microsoft for not documenting this, and perhaps other annotations one can use, for Excel.
One can start from typical UIs and tinker from there: why aren't they good enough for programming?
A good first step is to notice that we don't even have static data objects. UIs are full of them (forms), but you cannot copy-paste or store them as a whole; everything is ad hoc.
Now imagine that every form could be handled like a Unity scriptable object. And maybe with something like what prefab variants do: data inheritance.
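To illustrate the idea, here is a hypothetical Python sketch: a "prefab" form as plain data, and a variant that stores only its overrides (prefab-variant semantics approximated with dict merging):

```python
# A base form as plain, copyable, storable data.
base_form = {
    "title": "Contact request",
    "fields": ["name", "email", "message"],
    "submit_label": "Send",
}

def apply_variant(base, overrides):
    """Variant = base data plus a sparse set of overrides (data inheritance)."""
    merged = dict(base)
    merged.update(overrides)
    return merged

# The variant only records what differs; everything else is inherited.
support_form = apply_variant(base_form, {"title": "Support request"})
```

The variant stays in sync with the base for everything it doesn't override, which is what makes the pattern useful for forms.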
Questions can be linkbait, but aren't necessarily linkbait. I'd say "Why isn't there any HiDPI resolution on my M1 Mac?" is a perfectly cromulent title - it tells you clearly what the topic is.
That's probably because you are looking at the inlined assembly of the definition. If you name and reuse all the partial patterns, it becomes much clearer. Though regex is a cool obfuscation method.
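For example, composing a regex from named partial patterns instead of one inlined blob (a sketch; these fragments are deliberately simplified and are not a full email grammar):

```python
import re

# Named partial patterns, reused instead of being inlined everywhere.
LABEL  = r"[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?"   # one host-name label
DOMAIN = rf"{LABEL}(?:\.{LABEL})+"                      # labels joined by dots
LOCAL  = r"[A-Za-z0-9._%+-]+"                           # simplified local part
EMAIL  = rf"^{LOCAL}@{DOMAIN}$"

email_re = re.compile(EMAIL)
print(bool(email_re.match("user@example.com")))  # True
print(bool(email_re.match("not-an-email")))      # False
```

Each fragment is readable on its own, and the final pattern reads like a grammar rather than line noise.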
It would then be at least 3/4 of a page long, I guess.
The fun part is that this is not even the full truth. As the list of TLDs isn't very static any more, it's additionally difficult to determine whether a host name is valid. That is only possible with some dynamic list (or a regex that would grow indefinitely and change forever). The presented solution doesn't even take this into account.
The source page I've linked is quite an interesting read on that whole topic.
You would probably want to reuse referenced definitions like domain and IP, which are not email specific. But yes, all of our JS could be much shorter if we used APL, but most of us like readability :P
I kind of don't get why the TLD should be validated. Does it matter any more than whether a subdomain is not registered or an IP is not reachable? I think "valid as potentially deliverable" and "actually deliverable" should be distinguished (like well-formed XML vs. schema-validated XML).
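The distinction could be sketched like this (hypothetical Python; both function names are mine, and the deliverability side is stubbed because it needs DNS/SMTP at runtime):

```python
def is_well_formed(address):
    """Cheap syntactic check: exactly one '@', non-empty local and domain parts.
    Deliberately permissive -- like checking XML well-formedness, not the schema.
    """
    local, sep, domain = address.partition("@")
    return bool(sep) and bool(local) and bool(domain) and "@" not in domain

def is_deliverable(address):
    """Would need an MX lookup plus an actual delivery attempt; this cannot be
    decided from the string alone, so it is only a placeholder here."""
    raise NotImplementedError("requires DNS/SMTP at runtime")

print(is_well_formed("-@-"))         # True: well-formed per the loose check
print(is_well_formed("no-at-sign"))  # False
```

Well-formedness is a stable, testable property of the string; deliverability is a property of the world at a given moment, which is exactly why conflating the two causes the arguments below.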
The TLD part matters as some part of the email format is defined through the format of a valid host name. "something.com" is a valid host name, but "something.something" isn't currently a valid host name. So an email address "something@something.something" isn't a valid email address (currently).
But at the end of the day this is all moot, imho. The "only" sane test to check the validity of an email address when someone shows you one is whether you can successfully deliver mail there.
Because even if an address is formally valid, that doesn't mean it will get accepted by all systems on its way. Almost nobody follows the underspecified, confusing, and contradictory specs to the letter.
That was my point in the first place: trying to validate email addresses is a rabbit hole. It's for sure everything but "simple", as claimed above.
The point I was making is that whether or not you can successfully deliver email is not a sensible test of the validity of an email address, looking at the address purely as data. As I pointed out, my email archive contains many email addresses that are no longer ‘valid’ by your definition, but they are still valid as data.
By your definition email address validity changes literally on a moment to moment basis. Addresses are becoming invalid constantly and new ones are becoming valid constantly. It’s not a useful definition of validity, and not even something you can test meaningfully.
I got your point before, and I think it's valid.
That's why I've formulated my "definition" carefully:
> the validity of an email address when someone shows you one
It's of course not a "definition" someone could write down in a spec. But it's by far the best "informal validity check" in practice. It checks whether an email address is currently valid. You practically can't do more anyway!
The "formal validity" of an email address changes with time nowadays, as I've pointed out: it depends directly on the formal validity of the host name part, which can change over time given that the list of TLDs changes over time (which wasn't the case when those specs were written; fun fact: there is more than one spec, and they contradict each other).
To add to that, there are two more important aspects. Firstly, an email address you can't send mail to is mostly worthless in practice, as it can't be used for its primary purpose. Secondly, even perfectly "valid" addresses (by the spec) aren't accepted by a lot of parties that claim to handle email addresses! I guess a lot of systems would, for example, refuse an address looking like "-@-", wouldn't they? But it's perfectly valid!
My initial argument was that claiming it's "easy" to validate email addresses is wrong in multiple dimensions. In fact it's one of the more complicated questions out there (given the tragedy of the specs).
I claim that it's possible to design cargo ships and planes to be driven by anyone after short training, comparable to getting a driving license.
It all comes down to how much you want to invest in being as safe as possible from accidents. The cost of a cargo ship or plane accident is very high, and there is no compelling reason(?) why everyone should be able to drive those vehicles. Therefore it makes sense that those vehicles are driven by professionals and designed for professionals.
If your server has millions of users and provides such value that downtime is not an option, maintaining such a server should be done by professionals, and the maintenance tools should be designed for professionals.
However, that's not the case for every server, and the cost of a failing "empty" server is basically zero (unlike an empty cargo plane).
I think it would be interesting to see software designed not around centralized servers or PCs, but around PSs = Personal Servers, where user data lives on the users' own servers and services only link and communicate between them.
Why so? In the web world we represent most of our non-UI structural data as JSON. HTML is also structural data, but what makes it so special that a separate serialization format is needed? Or why don't we represent abstract business data as HTML, even when it has a similar structure?
Therefore I think it's very understandable why people come to that conclusion so often.