What you are referring to is called capillary condensation [1]. When you have a hydrophilic surface with thin capillaries or small pores, they can pull water from the air below 100% RH. However, this process requires an enclosed space with a very small radius and the air-water interface is always concave in this case (it's just how capillary forces work).
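For a rough sense of scale, the Kelvin equation relates pore radius to the relative humidity at which capillary condensation occurs. Here's a minimal sketch (the function name and the water constants at 25 °C are my own assumptions, not from the paper):

```python
from math import exp

def kelvin_rh(r_m, gamma=0.072, v_m=1.8e-5, temp=298.0):
    """Relative humidity at which a hydrophilic pore of radius r_m
    (meters) fills by capillary condensation, from the Kelvin
    equation: ln(p/p0) = -2*gamma*V_m / (r*R*T).
    gamma: surface tension of water (N/m); v_m: molar volume (m^3/mol)."""
    R = 8.314  # gas constant, J/(mol*K)
    return exp(-2 * gamma * v_m / (r_m * R * temp))

# A 10 nm pore fills at roughly 90% RH; a 2 nm pore at roughly 59% RH.
print(f"{kelvin_rh(10e-9):.2f}")
print(f"{kelvin_rh(2e-9):.2f}")
```

Note the radius enters in the denominator of the exponent: only very small pores condense water meaningfully below saturation, which is why this requires an enclosed space with a very small radius.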
Forming a convex surface, on the other hand, requires an at least slightly hydrophobic material and produces a positive internal pressure. This is a key difference, because condensation into a hydrophilic pore is favorable in terms of free energy, while condensing onto a hydrophobic surface is unfavorable (unless you have a supersaturated vapor).
> Theoretically speaking, you can have a material that somehow absorbs high moisture from the air but has microscale properties that promote creation of droplets then somehow these droplets are separated from the rest of the air
That "somehow" is what makes the paper's claims impossible. The water condenses spontaneously into the pore because it thereby lowers its free energy. Extruding it onto the surface is then even more unfavorable than direct condensation. Unfortunately, no passive system can achieve this feat, no matter how cleverly nanostructured, as it would go against the arrow of increasing entropy. You need an external energy source to drive that process.
I use AI to help my high-school age son with his AP Lang class. Crucially, I cleared all of this with his teacher beforehand. The deal was that he would do all his own work, but he'd be able to use AI to help him edit it.
What we do is he first completes an essay by himself, then we put it into a Claude chat window along with the grading rubric and supporting documents. We instruct Claude not to change his structure or tone, but to edit for repetitive sentences, word count, grammar, and spelling, and to make sure his thesis is sound and carried throughout the piece. He then compares that output against his original essay paragraph by paragraph, looking at what changes were made and why, and, crucially, whether he thinks it's better than what he originally had.
This process is repeated until he arrives at an essay that he's happy with. He spends more time doing things this way than he did when he just rattled off essays and tried to edit on his own. As a result, he's become a much better writer, and it's helped him in his other classes as well. He took the AP test a few weeks ago and I think he's going to pass.
For people wanting to dig into this idea some more, I'd recommend Austin Kleon's book "Steal Like An Artist." It also explores the nuance between copying and stealing without being a thief.
There's a quote I learned when doing theatre, which I've seen attributed to either the stage magician Doug Henning or possibly Stanislavski, describing the process of art as taking something that's difficult and making it habit, then taking something that's habitual and making it easy, and then taking something that's easy and making it beautiful.
For example, as an actor, you learn your lines by rote (they become habit), then you gain an understanding of the character's motivations (remembering the lines becomes easy, because of course that's what your character would say), then you work to tune your performance so the audience shares in the emotion and unspoken meaning of the lines (that's beautiful/art).
As this relates to software, I think it goes something like: you learn the magic incantation to make the computer do what you want (solving a hard task becomes habit), then you learn why that incantation works (solving it becomes easy), then you figure out better ways to solve the problem, such that the original friction can be removed completely (you find a more beautiful way to solve it).
> Cut threads into printed parts with a thread tap for quick design of low-reuse joints.
I've found wood screws work well for this. The wood screw can cut its own threads without needing to use a tap.
It does put some stress on the part, though. I mostly print in PETG, which is strong enough, but PLA might split if the hole were parallel to the layers.
> A design limitation of threaded inserts is that they are not reliably usable for screws inserted from the back side. During insertion, heat-set inserts often push some molten plastic into the hole beneath them, preventing easy insertion of a screw from the back side.
A trick I sometimes use:
1. Before installing the insert, insert the screw from the back side
2. Screw the insert onto the protruding screw
3. Use a soldering iron to install the insert+screw together into the plastic
Because the screw is filling the hole, the molten plastic can't block the hole. Instead, the molten plastic forms itself around the screw, and it acts like a Nyloc nut.
This was an interesting connection to me between meditation and neuroscience. Buddhists talk about the "monkey mind" that chatters incessantly. Well, that's the default mode network, the part of your brain that is active when you're not engaged in a specific task, when you're thinking about yourself, others, the past, or the future. A useful adaptation in our past environment for sure, but overactivity can be detrimental. The Buddhist solution is to meditate: to focus the attention on a single thing and not be distracted by the chatter. That ability lives in the prefrontal cortex! It's able to override the DMN, and it's something that can be trained just by exercising it.
What we suffer from to-day is humility in the wrong place. Modesty has moved from the organ of ambition. Modesty has settled upon the organ of conviction; where it was never meant to be. A man was meant to be doubtful about himself, but undoubting about the truth; this has been exactly reversed. Nowadays the part of a man that a man does assert is exactly the part he ought not to assert—himself. The part he doubts is exactly the part he ought not to doubt—the Divine Reason. Huxley preached a humility content to learn from Nature. But the new sceptic is so humble that he doubts if he can even learn. Thus we should be wrong if we had said hastily that there is no humility typical of our time. The truth is that there is a real humility typical of our time; but it so happens that it is practically a more poisonous humility than the wildest prostrations of the ascetic. The old humility was a spur that prevented a man from stopping; not a nail in his boot that prevented him from going on. For the old humility made a man doubtful about his efforts, which might make him work harder. But the new humility makes a man doubtful about his aims, which will make him stop working altogether.
This is so needed. This was a very encouraging article.
"Being a fan is all about bringing the enthusiasm. It’s being a champion of possibility. It’s believing in someone. And it’s contagious. When you’re around someone who is super excited about something, it washes over you. It feels good. You can’t help but want to bring the enthusiasm, too."
Stands in contrast to the Hemingway quote: "Critics are men who watch a battle from a high place then come down and shoot the survivors."
It feels socially safe, easy, and destructive to be a critic.
I kinda went down a rabbit hole a while back with certain treatments that can kill adipocytes, as there's actually some significant research backing both heat-generating and cold-generating treatments. They do kill fat cells, and they are flushed out of the body. But people who undergo such treatments do not lose fat. At best, these devices can reshape your fat, pulling it out of one area and distributing it more evenly in other areas.
The problem is that when you kill an adipocyte, it releases all of its triglycerides, which are then free to move around the blood stream. But when blood triglyceride levels are high and there isn't significant oxidation, other metabolic processes are triggered to start to store them. So you kill an adipocyte, release the triglycerides, which get reabsorbed into still living adipocytes, which now get engorged and then multiply again, replacing the fat cells that have been killed.
After learning quite a bit about these processes, I think these devices might actually be useful, not for losing fat, but by eliminating this sort of fat memory. In other words, they should be used after significant weight loss, because adipocytes are relatively empty and externally triggered apoptosis can kill the cells without releasing significant quantities of triglycerides which can be reabsorbed and trigger adipocyte mitosis. I think this would effectively reset that person to a state as if they had never been fat in the first place. Thoughts?
> a bit like trying to explain a vacuum cleaner to someone who has never seen one, except you're only allowed to use words that are four letters long or shorter.
> What can you say?
> "It is a tool that does suck up dust to make what you walk on in a home tidy."
I like the axiomatic definition of entropy. Here's the introduction from Pattern Recognition and Machine Learning by C. Bishop (2006):
> The amount of information can be viewed as the ‘degree of surprise’ on learning the value of x. If we are told that a highly improbable event has just occurred, we will have received more information than if we were told that some very likely event has just occurred, and if we knew that the event was certain to happen we would receive no information. Our measure of information content will therefore depend on the probability distribution p(x), and we therefore look for a quantity h(x) that is a monotonic function of the probability p(x) and that expresses the information content. The form of h(·) can be found by noting that if we have two events x and y that are unrelated, then the information gain from observing both of them should be the sum of the information gained from each of them separately, so that h(x, y) = h(x) + h(y). Two unrelated events will be statistically independent and so p(x, y) = p(x)p(y). From these two relationships, it is easily shown that h(x) must be given by the logarithm of p(x) and so we have h(x) = − log2 p(x).
This is the definition of information for a single probabilistic event. The definition of entropy of a random variable follows from this by just taking the expectation.
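Bishop's two defining properties, monotonicity in the probability and additivity over independent events, are easy to check numerically. A minimal sketch (function names are my own):

```python
from math import log2, isclose

def info(p):
    """Information content ('surprise') of an event with probability p, in bits."""
    return -log2(p)

def entropy(dist):
    """Entropy of a discrete distribution: the expectation of the
    information content, H = -sum p(x) log2 p(x)."""
    return sum(p * info(p) for p in dist if p > 0)

# Additivity: for independent events, h(x, y) = h(x) + h(y),
# since p(x, y) = p(x) * p(y).
px, py = 0.5, 0.25
assert isclose(info(px * py), info(px) + info(py))

print(entropy([0.5, 0.5]))  # fair coin: 1 bit
print(entropy([1.0]))       # certain event: no information
```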
Not guessing is perhaps the most important thing to the business.
I developed a lot of my problem solving skills in semiconductor manufacturing where the cost of a bad assumption tends to be astronomical. You need to be able to determine exactly what the root cause is 100% of the time or everything goes to hell really fast. If there isn't a way to figure out the root cause, you now have 2 tickets to resolve.
I'll throw an entire contraption away the moment I determine it has accumulated some opacity that antagonizes root cause analysis. This is why I aggressively avoid use of non-vanilla technology stacks. You can certainly chase the rabbit over the fence into the 3rd party's GitHub repo, but I find the experience gets quite psychedelic as you transition between wildly varying project styles, motivations and scopes.
Being deeply correct nearly all of the time is probably the fastest way to build a reputation. The curve can be exponential over time with the range being the value of the problem you are entrusted with.
That would be a very valuable lab, IF students hadn't been explicitly trained in opposite behaviour for a decade by then.
I lived a very similar experience:
In my 4th-year computer science course on software engineering, the professor assigned us a four-phase programming project for the semester.
My teammate and I spent several sleepless days on the first assignment, and felt some of the requirements were contradictory. Finally we reached out to the professor, and he formally clarified the requirements. We asked him, "well OK, if requirements are unclear, what are we as students supposed to DO?!?" and he answered - exactly what you did; ask the user/client for clarification. "OK, but what if we hadn't, what if we just made assumptions and built on those??". And his eyes twinkled in a gentle smile.
My teammate and I had worked in the industry as summer students at that point, and felt this was the best, most realistic course the university had offered, not least because after every phase you had to switch code with a different team and complete the next phase on somebody else's (shoddy, broken, undocumented) code. This course was EXACTLY what the "real world" was like, but the rest of the class was trained on "Assignment 1, question 1, subquestion A", and wrote a letter of complaint to the Dean.
I understood their perspective, but boy, were they in for a surprise when they joined the workforce :)
I found when I was working on my dissertation that I struggled to make progress on the actual writing. No matter how much work I did, I wound up rewriting, editing, re-thinking, deleting, and starting over.
I never fully solved that problem, but I found a workaround that helped: I started writing notes and drafts by hand in cursive with my non-dominant hand.
Writing by hand, by itself, calmed me and focused my mind, whereas writing in a word processor almost always caused a spiral of distraction and increasing agitation.
Increasing the difficulty of the literal, physical writing process helped me, I think, in a few ways. It became much costlier not to commit to a single version of a thought, so I had a strong incentive to pare away some of the noise surrounding it and state it in its most direct, least objectionable form.
I'm also convinced, though I can't prove it, that dramatically slowing the physical act of writing improved my working memory.
That being said, I strongly agree with just about everything this piece says, whether or not one writes by hand. And I would add that writing also forces people to use a wider range of faculties and forms of reasoning. I doubt one could overstate the value of this as intellectual exercise.
> If you’re repeatedly drawn to a thought, feeling, or belief, write it out. Be fast, be sloppy.
I couldn't agree more here! A friend has wanted to start a writing/journaling habit for a long time, but didn't know what to write about. I told him, don't think to write—write to think [1].
Showing up to an empty page, without knowing what to write, is totally acceptable! So is writing down things that make you feel embarrassed, confused, etc.
When I'm journaling, I often find prompts/frameworks helpful for guiding this process.
I really like Byron Katie's framework, which she calls The Work [2]. After you notice and draw to mind a stressful thought, answer these four questions:
Q1. Is it true?
Q2. Can you absolutely know that it’s true?
Q3. How do you react, what happens, when you believe that thought?
Q4. Who would you be without that thought?
Then, invert the thought. She writes, "Turn the thought around. Is the opposite as true as or truer than the original thought?"
Derek Sivers also shares some really great questions for journaling for reframing [3].
Exactly, and I also feel like the act of "transferring" one's thoughts to visual symbols (writing, coding, diagramming) helps a lot with mental defrag and garbage collection.
Your exact thoughts have already been put to paper by L. P. Hammett, the godfather of physical organic chemistry (the exact description of chemical reactions):
one might “... overlook the great difference between exact theory and approximate theory. Again, let me emphasize my great respect for approximate theory. [...] if one starts looking for an effect predicted by this kind of theory to be impossible, the odds are against a favorable outcome. Fortunately, however, the community of scientists, like that of horseplayers, contains some people who prefer to bet against the odds as well as a great many who always bet on the favorite. In science we should, I think, do all we can to encourage the man who is willing to gamble against the odds of this sort.
This does not mean that we should encourage the fool or the ignoramus who wants to play against suicidal odds, the man who wants to spend his time and usually someone else’s money looking for an effect incompatible with, let us say one of the conclusions reached by Willard Gibbs. Gibbs started from thoroughly proven generalizations, the first and second laws of thermodynamics, and reasoned from them by exact mathematical procedures, and his conclusions are the best example I know of exact theory, theory against which it is futile to struggle.”
Piaget is well known in teaching circles as a philosophical father of pedagogy. A (slightly) less well-known pedagogue is Vygotsky, who coined the term “Zone of Proximal Development”. The idea is that kids can learn from others and from experimentation if you design activities that take the skills they currently have, consider the skill you want them to acquire, and build steps between the two that a child can succeed at. To develop this example: once a child can walk, they can learn to balance by being given a task that allows them to safely experiment with falling over and staying upright. Once they can balance, you can experiment with moving while balancing. Once they can move forward while balancing, they can learn stopping safely. Finally, they should be ready to learn how to pedal.
If you don’t allow them to complete all the previous steps, they may just keep failing at the next task, because they’re not yet in the “zone” to be able to acquire the next skill.
If a child can’t balance and move forwards unaided, they won’t be able to do the next thing (pedalling) even with help.
Children have different skills and capabilities and Vygotsky is not prescriptive about who needs to help, and the ZPD theory often encourages learning from peers rather than adults (parents/teachers).
I agree, but to be fair I think that point could have been made clearer in the post.
A similar but related lesson: the best way to teach something is to design a task that is just difficult enough that the learner can figure it out on their own.
When I was reading parenting books in preparation for my own kids, this is one consistent theme that kept coming up, sometimes called "scaffolding." The idea is that you provide a safe environment, design a task that is just the right level of difficulty, then let the child figure it out themselves. (For example, rather than directly holding a kid climbing up a ladder, let them climb it by themselves while you stand by to catch them just in case.) As a result, they develop more independence, self-confidence, and the lessons stick.
"Every time we teach a child something, we keep him from inventing it himself. On the other hand that which we allow him to discover by himself will remain with him visibly for the rest of his life." -- Jean Piaget
> The trick is to not care enough about your job to get hurt but not care so little that you could short-term be hurt.
It really depends on your personal psychology. After I burnt out in a demanding role that I adopted as a big part of my identity, I joined a new company vowing to not take work as seriously (I remember telling myself, "if excess effort isn't rewarded, the optimal strategy is to maximize compensation, minimize necessary effort, and eliminate excess effort").
After a few months of recovery and ruminating on why I still felt so bad (plus therapy), I learned a few things about myself:
1. I feel like garbage when I'm half-assing something at work or not giving my all -- especially when the people around me are putting in the work.
2. When I am giving my all and I feel like I'm not being recognized, I begin to lose motivation and burn out. Simple tasks become very laborious. This is a gradual, months-long process that is difficult to recognize is happening.
3. When I start to burn out, I am forced by my mind and body to half-ass things, which makes me more demotivated, which exacerbates the burnout.
Putting these insights into action, I've so far been able to keep burnout at bay by finding roles where I can give work my all, receive recognition, and be surrounded by others who are putting in similar effort. This doesn't mean blindly trusting the company or destroying my work-life balance -- I believe that "recognition for hard work" includes proactively protecting hard workers from their workaholic tendencies and giving them the flexibility to take breaks. I'm lucky to work with really great people where I frequently pass along responsibilities or take work from others to avoid over-stressing any one person and enable things like multi-week vacations. I have no idea how I will change my approach if I lose this workplace dynamic or pick up more forcing functions on my workday (e.g. having kids) in the future, but it's working pretty well for me right now.
All of this is to say: for me, the low-trust "do the bare minimum to stay employed" approach didn't actually help me get out of burnout into fulfillment -- What helped was finding a work situation where I could give my all and not feel taken advantage of. People are wired differently, so I want to caution against a one-size-fits-all approach.
My favorite quote from one of my favorite books, Anathem by Neal Stephenson (copied from GoodReads):
"Thousands of years ago, the work that people did had been broken down into jobs that were the same every day, in organizations where people were interchangeable parts. All of the story had been bled out of their lives. That was how it had to be; it was how you got a productive economy. But it would be easy to see a will at work behind this: not exactly an evil will, but a selfish will. The people who'd made the system thus were jealous, not of money and not of power but of story. If their employees came home at day's end with interesting stories to tell, it meant that something had gone wrong: a blackout, a strike, a spree killing. The Powers That Be would not suffer others to be in stories of their own unless they were fake stories that had been made up to motivate them. People who couldn't live without story had been driven into the concents or into jobs like Yul's. All others had to look somewhere outside of work for a feeling that they were part of a story, which I guessed was why Sæculars were so concerned with sports, and with religion. How else could you see yourself as part of an adventure? Something with a beginning, middle, and end in which you played a significant part? We avout had it ready-made because we were a part of this project of learning new things. Even if it didn't always move fast enough for people like Jesry, it did move. You could tell where you were and what you were doing in that story."
Some people need to have their "story", otherwise they end up miserable, regretting their wasted lives.
Yes, actually. It's not a panacea, but it's a foothold.
Nearly all my hardside cases have an AirTag stuffed in one of the "Surface Mount" kits from ElevationLab. It looks like a pressure valve from the other side, and I might replace them with Security mounts if I'm really worried. Having those was a FANTASTIC way to track my cases when I left them with a (trusted) friend to be shipped along with some other Very High Value gear. Being able to see what was going on (and know when it had reached its destination) was invaluable. On the way back, I could watch my luggage as it slogged its way through the airport luggage-handling system. It's not real-time, but it's good enough for a rough location.
A friend of mine was able to locate their stolen vehicle down to the block, then find the vehicle from there with a drone, call the cops, and end up busting an interstate chop shop in the process. The AirTag consistently got bursts of updates from passing vehicles and the neighbor's HomePod.
All this because she had hidden an airtag in the gas cap.
There are airlines that are encouraging people to put airtags/tiles/samsung trackers on their checked luggage because it helps them keep the airport handlers in check. A prime example of this is flying with guns (yes, you can fly with guns!) and how having an airtag made it EASIER to recover the firearm: https://www.youtube.com/watch?v=OHyb2amIkzo (This happened AGAIN, by the way: https://www.youtube.com/watch?v=GBngUc3rmY0 -- Yes, airlines are TERRIBLE about handling these things!)
1. https://en.wikipedia.org/wiki/Capillary_condensation