I feel like this is because Japanese people are more culturally inclined to be "nice" and cordial.
I agree it's possible to have a good customer experience without tipping, but I don't agree that tips don't promote a better customer experience. Bonuses, tipping, etc. are a pretty powerful way of enticing people to be better at their jobs.
It's somewhat infuriating. You used to tip waiters/waitresses. They took your order and brought it to you. Now people ask for tips when they do none of that. At every non-chain fast food joint or coffee stand, where all they do is stand at the cash register, take your order, and hand it to you, they now ask for a tip. The bakery asks for a tip. The taco stand asks for a tip.
On top of that, in SF, prices are off the charts. Went to a bread store: one loaf of bread + 1 canele + 1 coffee + 1 sandwich, $47. Got Chinese dumplings: 3 people, 3 plates of dumplings, and 3 side dishes, $135 (with tip). The same thing would have been < $30 in Singapore, Hong Kong, Shanghai, Taipei. Even Japan.
I don't get why food is so expensive in the US. You would think economies of scale would kick in at some point. I eat at home now except to be social, and tbh even then I'd rather just cook for a group. A good cookbook will get you better food than almost all restaurants. Even obscure ingredients from Asia are imported or even grown in the US now. There are a few exceptions.
I watch a lot of old movies (30s-40s-50s). A ton of them have a 40s-50s man marrying a 20s woman.
No idea if that was a societal norm, men's fantasy, or whatever. I have to believe movie execs would want the largest audience, so if 40s-50s men dating 20s women was not appealing to women of the time, they'd have been losing half the market. But maybe these particular movies were the male equivalent of romance novels, which seem to be written 99% for a female audience.
Some examples:
The Girl Can't Help It (1956)
South Pacific (1958)
My Fair Lady (1964)
The Far Country (1954)
Casablanca (1942)
To Have and Have Not (1944)
Charade (1963)
There are literally hundreds of them, and I'm not recommending any of the movies above per se, but it does stick out to me just how common that theme is: a man at least 15 years older than the woman, which I believe is less common today.
I wonder how many of the male protagonists in these movies are extraordinarily or above-average rich, successful, handsome, or charming.
I daresay you won't find a George from Seinfeld in there, rather a set of George Clooneys.
I don't doubt that there was an age difference, but we have/had plenty of living couples from that era and can look in the census records - were Spring/Fall marriages that common?
AFAIK an actress's career was considered dead by the time she was 40 or so, for most. Younger is sexier, basically. Male actors could play big roles later. It's not that older women didn't exist in real life; they outnumbered old men.
Plus, movies have always followed fashions. What was depicted was heavily limited by big-studio policies and by mandated codes, which limited both topics and the way things were shown.
Movies are not reality and never were. They don't depict real life now either.
There have been men in their 20s marrying women in their 50s too (especially when inheritances were involved), but since they wouldn't have children in that case, they would be totally invisible in these kinds of genetic statistics.
Even if the ages within a relationship were on average symmetric, men being fertile for longer would cause age at (a child's) conception to be higher for men. That effect needs to be untangled before we can know just how much men preferred younger women in the past, or vice versa.
There is a simpler explanation: those older men are known actors/stars who have established some influence in the industry. That, or some people might buy tickets to watch a movie because it has a familiar name in the leading role. The combination of the two might be enough to explain why older men can still play lead roles in movies.
Well, that leads to the question of why only men were famous actors past 40, which is where the argument becomes tautological.
Because men can get attractive roles past 40 and women can't, or only very rarely. I don't remember the name of the movie, but it had a Greek theme and a female character who was 40, yet the part was not offered to a 40-year-old but to a 30-year-old.
Because 40-year-old women are not supposed to look like 40-year-olds, which is where we are back to the fantasy thing.
Biology is a real thing, though: men can become fathers past 50, but women cannot. So I am not judging, btw. But those fantasies are real and strong, even though they are taboo nowadays.
Biased age disparity in marriage is hardly a "fantasy" or even "taboo". I suspect if you looked at e.g. the top quintile of male earners, or those who married more than once, you're liable to see an even more pronounced gap over the aggregate median. Equally interesting: how do wealthy divorced/widowed women marry afterwards?
Poking at US presidents whose marital age gaps stood out (i.e., more than 5 years):
This shortlist includes the two oldest (Biden, Trump) and youngest (Kennedy) presidents voted into office. A relevant quote attributed to Jacqueline Kennedy[1], whose second marriage was to a businessman 23 years her senior:
> The first time you marry for love, the second for money, and the third for companionship.
Well yes, there is this image that the successful older man has a younger wife (and even younger affairs). But the average man does not. Hence it serves the "fantasy" of the target audience.
But as far as I observe, this is frowned upon today. It still exists, of course; many of the Hollywood heroes regularly swap out their 20-year-old model partners, but there is increasing criticism of it. I mean, my partner is 7 years younger and even this was met with criticism.
Criticism from whom? If it's your friends, get better friends. Other than that, it might be family - which is easily explained as jealousy (any success at all tends to get that response). Am I missing some hypothetical case where someone criticises you about the age of your partner and their opinion somehow matters?
> Well, that leads to the question of why only men were famous actors past 40, which is where the argument becomes tautological.
It was well known that Hollywood had an age discrimination problem for women up until the mid or late 80s or so.
Like a really well known problem where women couldn’t get leading roles after a certain age. It was a well publicized thing and just kind of common knowledge.
Why would you want a 20-year-old wife at age 50? The maturity gap would be almost unbridgeable. The only benefits are sexual, and even then the maturity aspect might play a part. Even in my late 20s, the idea of someone who has just left their teens would be a deal breaker.
If a man at 50 wants to marry, it is usually because he wants kids; otherwise there is no reason to get married at that age. If he wants kids, the woman must be of fertile age, which is 15 to 35, with the best bet in the 20-30 interval. Just biology. Yes, the maturity gap is big, but if that is the price to pay to have kids ...
As they say, if you have dirty thoughts about a 14-year-old girl and want to get rid of them, just spend some time with a 14-year-old girl.
Same thing with 50-year-old men and 20-year-old wives. I am in my early 40s and there is no way I would marry a woman in her early 20s, unless she is very special. Sure, 20-year-old bodies are attractive, but the maturity difference is too great. The movies are fantasies, showing off women at the age of peak physical attractiveness, with just enough immaturity to make the man feel more responsible by contrast, but without any of the quirks of actual immaturity.
Having a 20 year old sex partner is another story. And many women are willing to let you live your fantasy, for a time, for a price...
Those are all before birth control and before the maturation of fields like sports, entertainment and computers where young men could command as much money and influence as old industrialists. That probably shifted cultural ideals.
Also, those movies are from the days when only a handful of movies were made every year, so the lack of competition meant you could write anything and jam it down people's throats, rather than needing to follow demand.
You know this hasn't stopped. E.g., Tom Cruise is 60 and you won't see him with female partners above 40, unless they look 30. Sure, he doesn't look his age, be it genes, good lifestyle, laser treatments, etc., but he always looks much older than his partners.
It happens in real life too: successful older men being with much younger women is normalized by society and a common sight. The opposite, not so much.
Top Gun: Maverick came out last year, and features Jennifer Connelly (52) as his love interest. That's still an age gap, but she certainly isn't under 40.
Women in their 20s wanting someone close to their age is a recent phenomenon, and only in the Western world. Go to Africa, Asia, South America, or even Eastern Europe, and women in their late teens to early 20s will regularly express an interest in guys in their 30s or 40s.
tldr: America is a woman's dating market, but the odds are stacked better for men once you leave the country.
I don't think it will ever happen except for toy projects. If you're manipulating some small list of 10-20 objects and those objects have some kind of useful visual representation, then for some small use case you can possibly, maybe, design a system that could do what's shown in the demos. I'm skeptical that it scales to more complex problems with more complex data.
Bret Victor himself has made zero headway. And no, Dynamicland is not it. Dynamicland is still coded in text with no visual representation itself.
Other examples always show the simplest stuff. A flappy bird demo. A simple recursive tree. A few houses made of 2-3 rectangles and a triangle. Etc...
To be even more pessimistic, AFAICT, if you find a single example you'll find that even the creators of the example have abandoned it. They aren't using it in their own projects. They made it, made some simple demos, realized it didn't really fit anything except simple demos, and went back to coding in text.
I'm not trying to be dismissive. I'd love to be proven wrong. I too was inspired when I first read his articles. But, the more I thought about it the more futile it seemed. The stuff I work on has too many moving parts to display any kind of useful representation in a reasonable amount of time.
What I can imagine is better debuggers with plugins for visualizers and manipulators. C# shipped with a property control that you could point at a class and it would magically make it editable. You could then write a custom UI for any type and it would show up in the property control (for example, a color editor). I'd like to see more of that in debuggers, especially if one of the more popular languages made it a core feature of their most popular debugger, so that it became common for library writers to include debug-time visualizers.
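(That's about C#'s PropertyGrid, but a rough analogue of library authors shipping debug-time visualizers already exists in Python land: the HTML repr hook that rich front-ends like Jupyter call. A minimal sketch; the Color class and its rendering are purely illustrative:)

    # Hypothetical example: a class whose "debug-time visualizer" is an HTML repr.
    # Rich front-ends (e.g., Jupyter) call _repr_html_ instead of the plain repr.
    class Color:
        def __init__(self, r: int, g: int, b: int):
            self.r, self.g, self.b = r, g, b

        def _repr_html_(self) -> str:
            hexcode = f"#{self.r:02x}{self.g:02x}{self.b:02x}"
            return (f'<div style="width:3em;height:1em;background:{hexcode}"></div>'
                    f'<code>{hexcode}</code>')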
Even then though, it's not clear to me how often it would be useful.
The seminal "No Silver Bullet" paper argues that there is no universal visual representation for arbitrary programs. The alternative is making specialized tools for your use case.
I think Tudor Girba has the most usable and real-world implementation of a Victor-like vision: moldable development in Pharo. The idea is that you adjust your development environment in real time with default and custom widgets, tools, and visualizations.
I've never really understood Victor's examples to be "examples of applications you can create if you follow my way of thinking", but more "hey, this is some cool stuff you can do with computers that you probably never even considered".
It seems Victor's vision has a hard dependency on an environment that makes this kind of meta-manipulation very natural and easy. Basically it seems his vision is the UX extension of this lower-level DX vision:
Of course this kind of runtime freedom/power/access probably has a performance cost, aaand I have a hard time figuring out how it would work in a real-life setting.
So yeah, instead of this super-universal tinkerability, what's really needed (at least in the short term) are better tools. A better strace/tcpdump, better frameworks (better batteries included in more places). More care for "post hoc" error investigation. (Yeah, a 2000-line exception trace going back to the EveFactory(AdamsRib) factory is great, but what was the incoming HTTP request? What was the value of the variables in the local scope? What was the git commit that produced this code? Etc.)
One big reason errors like that are usually unhelpful is for security. I would love it if my SQL errors printed the offending row out but that row also has PII so it can't be saved to a log. They'd need to be encrypted or stored in the database itself somehow to not ruin 10 layers of PII protections.
Not everyone has to work with PII, but the general rule of not logging your data to generic logs or stack traces still applies to everyone. On top of that, tools like languages or frameworks don't know what the data they're working with does, so they default to the secure option of not writing data out on errors. If you know the data isn't sensitive and it's a common spot for errors, you can have the data logged by tossing a try/catch around the pain point in your code.
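As a minimal sketch of that pattern in Python (the function and field names are made up for illustration): catch at the known pain point, log only fields you know are safe, and re-raise.

    import logging

    logger = logging.getLogger(__name__)

    def insert_row(row: dict) -> None:
        # Stand-in for the fragile operation (e.g., an INSERT that can fail).
        if "id" not in row:
            raise ValueError("row is missing an id")

    def process_row(row: dict) -> None:
        try:
            insert_row(row)
        except Exception:
            # Log only fields known to be non-PII, then re-raise.
            logger.exception("insert failed: id=%s status=%s",
                             row.get("id"), row.get("status"))
            raise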
If you write an application that makes money in a manner that involves transactions from people, it will be the case for you. That is, in fact, the majority of developers.
I can't imagine directly translating any project I've worked on to a non-code representation. But, that's only because they've been developed not only in code, but for code. I can totally see a post-code development experience that mirrors how programs work much better than a big string.
>> I can totally see a post-code development experience that mirrors how programs work
I can't, because how programs work is a projection of how computers work and how computers work is by doing math. The whole reason why we have been clawing our way up the ladder of abstraction for so many decades is that it's really hard to express "Which aircrew do I need in Seattle tomorrow morning" in terms of adding 1 to a value in a register. We invented these cool little machines that do math really fast and then made them so cheap and affordable that of course we started simulating everything that could be represented mathematically. I've been programming since 1975 and when I think back I can recall dozens of these conversations over the years. How do we free programming from code? Personally, I don't think we can. The code is all that programming and computing is. Just because we have managed to do so many things with our ultrafast calculators doesn't mean they can somehow be elevated beyond what they fundamentally are. It's like we want to somehow lift them up to be like us, and on just a little reflection that seems absurd doesn't it? You might as well expect it from a toaster or a crescent wrench.
Hm, couldn't you make the same argument about punch cards? The abstractions you talk about translate to different mediums differently. I think text/code/string is a very universal and low tech medium, but I don't think there's anything about it that would make it ideal for working with those abstractions. And, let's not forget that there's a myriad of different abstractions, which to me suggests that there might be as many different ideal mediums.
Given that there isn't some superior visual representation of math, I think it is reasonable to say there won't be one for code - at least for a while.
There are superior visual representations of math as soon as you add enough specificity that you can measure the difference between representations in terms of their impact or some other property. Equation coloring according to semantics stands out as an example. Interestingly, this example already has wider adoption in code than it does in math. As someone who has tried to format a LaTeX paper to use coloring, and has not had to do the same for colored code, I can understand why. Yet if you look at Khan Academy as an example, the technology lifts much of the burden of doing the coloring, so they do the coloring, because it helps to highlight the key ideas.
It can be a fun and illuminating exercise to go over equations you've written down and try to translate them to colored variants. The classification task forces your mind to engage more deeply with the equation and can improve understanding.
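For the curious, a minimal LaTeX sketch of semantic coloring using the xcolor package; the equation and the color-to-role mapping are just illustrative:

    \documentclass{article}
    \usepackage{xcolor}
    \begin{document}
    % One color per semantic role: blue for data, red for parameters.
    \[
      \textcolor{blue}{y} = \textcolor{red}{m}\,\textcolor{blue}{x} + \textcolor{red}{b}
    \]
    \end{document}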
I don't know, man. You could say that engineering and architecture are just applied math, but blueprints are not math notation (and neither is code, by the way).
If we had some good way to work with graphs, code could easily be represented as such. I mean still having code inside graph nodes, but you could switch views between program flow, data flow, data-structure dependencies, etc.
It could already be made better than what we have, but there are tons of little improvements on text already, and editors are very optimized for how we work currently, so it seems like it would take enormous effort to match that.
But, for example, in web development, seeing visual changes live is already the norm, and it was not popular back when he was doing his presentations (well, the meta refresh tag in the 90s, good times). Plus, tests running in a loop only on things that changed also seem like a step in a good direction.
Most of all, all languages are optimized for text. Visual representations seem to have an inherent problem: they're most intuitive to manipulate by touch, and that won't get anywhere close to the efficiency of a keyboard. I don't think it's impossible to solve, though. We just haven't yet. We've come a long way from switches and punched cards, and there is no reason to think we will stop here.
As much as it pains me to say, it's possible that much of code will be dictated to "AI" in the future and then graph representations of what's going on start to make much more sense.
We found that the key is to distinguish between writing and reading. Writing is tied to the computation, and there we want as much expressiveness as possible; text is often hard to replace. But whatever we write is just data, and data can be read in many ways, depending on the question we have about it. That is the basis of what we call moldable development. Not only are we using views extensively (in the range of thousands per system), we find it provides a significant competitive advantage, too.
> I'm not trying to be dismissive. I'd love to be proven wrong.
I don't see a world where simulating and visualizing n steps can ever be as performant as just having one step. Even deploying immutable structures will lead to performance penalties. The only place it is usable is in small/toy examples, where computing n steps can subjectively feel as fast as one step.
There's probably an argument for having the compiled/optimized and deployed version be the low-dimensional projection of the high-dimensional simulation, and when an error happens the developer should be able to restore (at least partially) that state in the simulation, which could help to understand the problem.
I think a Bret Victor-like dev tool would be a good companion to pure functional languages, and would help improve their standing.
Shared state code does not respond well to running functions in a tight loop while changing the code. Mutations accumulate and the results become meaningless very quickly.
There have been some minor attempts that may even predate Bret's work, especially around trying to run unit tests on every edit. Those tend to be slow, so some people have done work on using code-coverage tools to figure out which code changes affect which tests, but I suspect they ran into problems, since I haven't heard about any of those in some time now.
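A bare-bones sketch of that "run the tests on every edit" idea in Python, with simple polling and no coverage-based test selection (assumes pytest is installed; everything here is illustrative):

    import pathlib
    import subprocess
    import time

    def mtimes(root: str = ".") -> dict:
        # Snapshot modification times of every .py file under root.
        return {p: p.stat().st_mtime for p in pathlib.Path(root).rglob("*.py")}

    last = mtimes()
    while True:
        time.sleep(1)
        now = mtimes()
        if now != last:
            last = now
            # Naive: rerun the whole suite. Coverage-based selection would
            # narrow this to only the tests affected by the changed files.
            subprocess.run(["pytest", "-q"])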
For me, before I even get to packaging, I hit the install/environment issue with Python. Python, by default, wants to be installed at a system level and wants libraries/packages to be at a system level.
That shit has to stop. The default needs to be project-local installs. Node might have issues, but one thing they got right is defaulting to project-local installs, vs Python, where you need various incantations to get out of the default "globals" and other incantations to switch projects.
However, you'll find that, as with all packaging discussions, there are people opposing it because their workflow doesn't match yours and they don't want to change how they work. We, as a community, need a way to resolve such stalemates, or I fear we won't make much headway.
We need some person, a nice person, a benevolent person, who could somehow tell other people what to do, dictate it if you will, and it would be best if they could keep up this job for the rest of his or her life. We'll call them the Friendly Language Uncle.
My solution to this problem has been … to stop using Python as a universal tool for all things and instead use other tools purpose-built for those purposes.
Unfortunately (or not) that basically means no more python. Because it is definitely a “Jack of all trades, master of none”.
Focusing on programming languages that just want to be programming languages and not also a system service makes life so much better. You end up using languages that produce self-contained, easily shippable binaries, or languages with easily embeddable runtimes, instead of trying to write code that has to somehow survive in a “diverse ecosystem”, which generally makes it overly bloated and brittle as it grows so many appendages to solve so many orthogonal incompatibilities it comes to resemble enterprise open source…
You can do this now by creating a Python virtual environment[0]. Then you can package your project with a requirements file and some instructions on its use.
I use Python venvs often, and they work really well.
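For reference, the standard flow looks roughly like this (paths illustrative):

$ python -m venv .venv
$ ./.venv/bin/pip install -r requirements.txt
$ ./.venv/bin/python your_script.py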
Activating virtualenvs isn't necessary. In every python project I work in, I do
$ cd projectFoo
$ ./ve/bin/python whatever...
(more realistically, it's `make whatever` which then builds the virtualenv into `./ve` if needed, pip installs required packages into it, and runs the command).
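A minimal sketch of that Makefile pattern, assuming the layout described above (target and file names are illustrative):

    # Rebuild the virtualenv whenever requirements change.
    ve: requirements.txt
    	python3 -m venv ve
    	./ve/bin/pip install -r requirements.txt
    	touch ve

    .PHONY: whatever
    whatever: ve
    	./ve/bin/python whatever.py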
Yes, I agree that it would be nice if the default behaviour of `pip install -r requirements.txt` was to install into an isolated virtualenv specific to that project, but it's also not like it's completely impossible magic.
This is the important difference. Scripting languages should default to examining the current directory and then its parent directory etc. to find the resources they need. Python doesn't have this default and probably can't change at this point.
The Debian dist-packages/site-packages mechanism alleviates a lot of the problems with dependency hell and clobbering from packages needing to be installed. It's a shame it hasn't been embraced by mainline Python. A case of perfect being the enemy of good. Instead we get the chaos of pyproject as the next great thing.
This PEP is meant to ensure you don't clobber your Linux distro's base environment and break key OS applications.
Also, this problem doesn't exist on e.g. Windows, where there is no OS-installed Python to clobber. Other folks have taught themselves to always use virtual environments for this very reason and therefore don't share your problem.
Hence, there are tutorials out there that don't talk about your problem, and tools exist whose default behavior might be dangerous on your Linux distro.
No one can touch you but they can hurt you in the same way people hurt others online all the time. Verbal abuse, name calling, derogatory remarks, unwanted solicitation, etc...
Yes, VR is online, but it's a weird embodied, immersive-presence version of online that can feel a bit like hanging out in meatspace. And it has an off button.
Were we talking specifically about corporate work? There's more to meetings or meetups than just your job.
That said, I think VR/AR may eventually be a way to enhance online work meetings. It doesn't seem like it's there yet overall, but eventually I could definitely see it working.
Can you elaborate on how VR would enhance meetings? I really don’t see the point myself. I don’t want to meet some avatar in a virtual room and I don’t know anybody who would.
There are tons of accounts of how you feel a sense of 'presence' in VR. The head tracking fools your lizard brain into thinking you're actually there in the virtual world.
If you're reading this site I'm guessing you have access to $400. Just go buy a Quest 2 and try it out for 10 minutes, and you'll see what I mean.
Compared to standard video calls, there'd be a much greater sense of presence. Head movement, hand movement, and eye contact (once that's standard) would increase how close it feels to IRL meetings.
Which is a solution that scales extremely poorly in a global environment with potentially billions of people you might want to mute. More so in an environment which supports anonymity, so that they can just keep creating new accounts.
Kind of like spam: you can heuristically figure out who is hurling abuse at people and ban/shadowban/automute their communications to the people they are attempting to abuse.
I can imagine that the same way you have a shared list of undesirables. Essentially an assholeblock instead of adblock.
This would be a good idea but people can just make new accounts. It would also be open to abuse so it would need an appeal system and some sort of judiciary.
I don't understand why reputation isn't a solution offered for spam and bad people. Why not make users on the internet build trust?
For egregious breaches you don't need to give them an appeal, just ban them. For less egregious situations, client-side filtering like adblock doesn't get an appeal either, by virtue of people not owing you their time.
I haven't much of a clue about motion capture. But the capture I want to do requires feet, hands, face, shoulders, arms, knees, fingers, AND the ability to do it in any position: curl up in a ball, roll over on the floor. I want to capture more than just a head and hands upright, but AFAIK there's no solution under $20k.
See user name. If I want to make pr0n, I need to be able to capture those positions. Is there anything that can do it on the cheap?
Curling up in a ball and rolling around on the floor seems like a worst case for a lot of motion tracking setups. Systems based on wearable sensors won't work because you can't comfortably roll around in them, and single-viewpoint systems like the Kinect get confused when they can't see your whole body.
Have you tried getting a bunch of Kinects and doing some kind of sensor fusion? I don't know what the current state of the software is but you can buy the hardware for <$100 each used.
WAT? I pay for porn on patreon and have for years. Maybe it's just live video porn that's banned? I have no idea. AFAIK the artists I'm subscribed to are not hiding their creations. In fact they're all over sites like pornhub, spankbang, etc with their patreon address in their videos. It's all CG or real time 3D but it's most certainly porn.