
I don't know what your frame of reference is, but BART is above average for US public transit payment systems.

I've lived in the San Francisco Bay Area CA, Portland OR, and Philadelphia PA over the last 10 years. All of those metros have comparable public transit payment systems with auto-loading special use cards and are at various stages of adopting support for tap to pay. Honestly, within the US I can only think of NYC as having a better payment system as they were first movers on tap-to-pay adoption and it's basically fully adopted.

Internationally I think there is a larger range of experiences. I don't travel enough to properly gauge it, but I was in Paris in the last year and I don't think public transit payment was better. Still had to acquire specialized fare cards and navigate different payment systems between RATP and RER. Honestly, SF Bay comes out slightly ahead of Paris if only because Clipper is unified between various transit options (BART, Bus, Ferry, CalTrain) IMO.


> I don't know what your frame of reference is, but BART is above average for US public transit payment systems.

That doesn't change anything in the comment you're replying to. Just because it's above average for the USA does not mean it isn't also ancient by global standards.


And yet I included a favorable international comparison as well.


FWIW SEPTA has had tap-to-pay working for about 2 years now (so, 2 years longer than what BART just rolled out last week). But barely a decade ago SEPTA was still using physical tokens!

When I moved to the Bay Area, I thought it was so rad that I could use the Clipper card for VTA + BART + Caltrain + Muni + ferry all in one.


I remember first moving to Philly and getting a SEPTA Key and thinking, "This is dumb, it's literally just a MasterCard. Why can't I use my credit card like NY?" Then a few years later they rolled out support for other bank cards and I immediately took my SEPTA Key out of my wallet.


Some time ~11 years ago, I needed to take a bus trip in SF. There was nowhere nearby to reload a Clipper card, so I was happy when I found out I could do it online. I was less happy when the web site said it would take 24-48 hours for the newly-loaded funds to be available on my Clipper card.


Still the way it is. Clipper is a multi-agency cooperative. It sucks, but the fact that it works at all is a real triumph.


I find that a little hard to believe given that you can/could service Clipper cards at any Walgreens and a bunch of other retailers (e.g. pharmacies and hardware stores).


I was just North of Golden Gate Park. The nearest bus stop was 1 minute away. I wanted to take a 20 minute bus ride. The nearest Walgreens was 30 mins walk away (Geary and 42nd/43rd). The nearest street with shops was 12 mins walk away. I don't know whether any of those shops offered Clipper card top-up.


  I can only think of NYC as having a better payment system as they were
  first movers on tap-to-pay adoption and it's basically fully adopted.
Portland's TriMet had tap-to-pay well before New York.

  I was in Paris in the last year and I don't think public transit
  payment was better.
The multi-stage turnstiles at the RER stations… ugh.


When I lived in Portland you technically _could_ tap to pay, but I don't quite count it because Hop pass accrual meant that if you used TriMet regularly you needed a Hop card anyway. Just out of curiosity I checked around, and it looks like they extended that functionality to regular bank cards around two years ago [1]. Which is awesome, as now the only reason to get a Hop pass is for people who qualify for reduced fares or are unbanked (which makes sense).

> The multi-stage turnstiles at the RER stations… ugh.

Ah yes, I had one of many "I look like the tourist I am" moments navigating those while visiting Versailles.

[1] https://www.reddit.com/r/Portland/comments/1awweix/trimet_ex...


BART now does actually have tap-to-pay, but it's very recent: https://www.kqed.org/news/12052690/bart-fares-2025-credit-ca...


It's also had phone-based Clipper card support for years now. Credit card open-loop systems are pretty slow compared to a well-implemented closed-loop transit system like Suica in Japan, but BART's Clipper is probably about as slow in comparison.


Clipper (née TransLink) is a regional system, not a BART-specific one. In fact, BART was one of the last Clipper holdouts because they were hell-bent on having their own BART purse. Time to authorize really comes down to which readers you interact with. The current BART turnstiles+readers are pretty slow.


We watched this happen again in New York, where OMNY was supposed to be the region-wide fare system, but the Port Authority decided not to use it, all the bus systems decided not to use it, and the MTA's railroads decided not to use it. It is a mild disaster. (Hilariously, the Port Authority runs two rail systems, PATH and the JFK AirTrain. The AirTrain does take OMNY.)

Does Caltrain still count entering the BART station at Millbrae as not tapping off? That was always my favorite quirk of the Clipper system.

(For those not familiar... Caltrain is a tap on / tap off "proof of payment" system. You're charged the full fare when you tap on, and refunded what you didn't use when you tap off. BART and Caltrain share a platform at Millbrae. You can get off Caltrain and be right at the gate to get into BART by tapping your Clipper card. Well. This taps you into BART, but doesn't tap you off of Caltrain. To get your refund, you had to know this was a thing and go find a fare validator before tapping on to BART. You also end up being inside Caltrain's proof of payment required area without proof of payment while you walk along the platform from Caltrain's fare validator to BART's entry turnstile. I am probably the only person to ever care about this, but...)


  Does Caltrain still count entering the BART station at Millbrae as not tapping off?
Couldn't say. When I took Caltrain regularly I gave up on the BART/Caltrain transfer pretty quickly.


In my day, that transfer was a privately owned blue bus called the Jitney.


I can tap my credit card on any public transit system in Southern Ontario (where Toronto/Waterloo are located).

I can still use an auto-loading special use card if I want. I do that so I can have a free transfer between different transit systems during my commute.


My frame of reference is the world, which is reasonable given the US's status in it.

Hong Kong, China, Taiwan, Dubai, Japan, UK. The USA is supposed to be among the top in terms of technology but infra is just garbage. The BART is pathetic. I don't know why you defend it with pride. Attack it, because if you hate it and you are vocal about it, things are more likely to change.

I'm sick of people defending something that's shit because of pride. It's garbage.


> Honestly, within the US I can only think of NYC as having a better payment system as they were first movers on tap-to-pay adoption and it's basically fully adopted.

Chicago is pretty good too. IIRC they also have tap-to-pay. In fact, I think they had it before NYC.


Chicago has had tap to pay for as long as I've lived here (11 years now). I think it predates me having any tap-to-pay credit card or phone, lol.


I also think asyncio missed the mark when it comes to its API design. There are a lot of quirks and rough edges to it that, as someone who was using `gevent` heavily before, strike me as curious and even anti-productive.
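For instance (a contrived sketch of my own, not from any particular codebase): where gevent lets blocking-looking code yield implicitly via monkey-patching, asyncio "colors" every function, and forgetting an `await` doesn't raise an error at the call site; it silently hands back a coroutine object instead of a result.

```python
import asyncio

async def fetch(n):
    await asyncio.sleep(0)  # stand-in for real I/O
    return n * 2

async def main():
    result = fetch(21)                  # forgot the await: not an error...
    assert asyncio.iscoroutine(result)  # ...you just get a coroutine object
    return await result                 # the actual value requires an await

assert asyncio.run(main()) == 42
```

Under gevent, `fetch(21)` would simply return 42; the scheduling is invisible. That asymmetry is a big part of why the transition feels rough.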


> but we know that reasoning is an emergent capability!

Do we though? There is widespread discussion and growing belief in this, but I have yet to see conclusive evidence. That is, in part, why the subject paper exists: it seeks to explore this question.

I think the author's bias is bleeding fairly heavily into his analysis and conclusions:

> Whether AI reasoning is “real” reasoning or just a mirage can be an interesting question, but it is primarily a philosophical question. It depends on having a clear definition of what “real” reasoning is, exactly.

I think it's pretty obvious that the researchers are exploring whether or not LLMs exhibit evidence of _Deductive_ Reasoning [1]. The entire experiment design reflects this. Claiming that they haven't defined reasoning and therefore cannot conclude or hope to construct a viable experiment is...confusing.

The question of whether or not an LLM can take a set of base facts and compose them to solve a novel/previously unseen problem is interesting and what most people discussing emergent reasoning capabilities of "AI" are tacitly referring to (IMO). Much like you can be taught algebraic principles and use them to solve for "x" in equations you have never seen before, can an LLM do the same?

To that end, I find this experiment interesting enough. It presents a series of facts and then presents the LLM with tasks to see if it can use those facts in novel ways not included in the training data (something a human might reasonably deduce). Given that, their results and summary conclusions are relevant, interesting, and logically sound:

> CoT is not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching, fundamentally bounded by the data distribution seen during training. When pushed even slightly beyond this distribution, its performance degrades significantly, exposing the superficial nature of the “reasoning” it produces.

> The ability of LLMs to produce “fluent nonsense”—plausible but logically flawed reasoning chains—can be more deceptive and damaging than an outright incorrect answer, as it projects a false aura of dependability.

That isn't to say LLMs aren't useful; the paper is just exploring their boundaries. To use legal services as an example, using an LLM to summarize or search for relevant laws, cases, or legal precedent is something it would excel at. But don't ask an LLM to formulate a logical rebuttal to opposing counsel's argument using those references.

Larger models and larger training corpuses will expand that domain and make it more difficult for individuals to discern this limit; but just because you can no longer see a limit doesn't mean there is none.

And to be clear, this doesn't diminish the value of LLMs. Even without true logical reasoning LLMs are quite powerful and useful tools.

[1] https://en.wikipedia.org/wiki/Logical_reasoning


Discerning the limits is the most important thing of all, and we seem very eager to obfuscate it for LLMs.

We so desperately want something we can sell as AGI, or at least as magic, that the boundaries on the tools are few, far between, and mostly based on legal needs ("don't generate nudes of celebrities who can sue us") rather than understood technical limits.

The more complex and sophisticated the query, the harder it will be to double-check and make sure you're still on the rails. So it's the responsibility of the people offering the tools to understand and define their limits before customers unknowingly push their legal-assistant LLMs into full Sovereign Citizen mode.


I feel like this started, or greatly accelerated, when Guido stepped down as BDFL. Python is now on a path where the essence of what made it popular (readable, well designed, productive) is being crushed under the weight of its popularity. The language now feels bloated and needlessly complex in areas that were previously limited, but simple.

I recently chased down a bug where something was accidentally made a class variable because a type hint was left off it by accident and it clicked for me that Python is not the same language I loved at the start of my career.
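One common shape of that bug (a hypothetical sketch, not my actual code) shows up in dataclasses: an assignment without an annotation is silently skipped by the field machinery and becomes a plain class attribute shared across all instances.

```python
from dataclasses import dataclass, fields

@dataclass
class Config:
    retries: int = 3  # annotated: a proper per-instance dataclass field
    timeout = 30      # annotation forgotten: a shared class attribute

# only the annotated name became a field
assert [f.name for f in fields(Config)] == ["retries"]

a, b = Config(), Config()
Config.timeout = 99  # one "instance's" timeout changes for everyone
assert a.timeout == 99 and b.timeout == 99
```

No error, no warning; the missing `: int` just quietly changes the semantics.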


The problem with any project is that at some point it's essentially complete, and all we need is small maintenance to keep it going. Google used to have an algorithm that pretty much solved web search. Tinder solved dating. Spotify solved music delivery. The problem is, you're sitting with a hundred managers and a thousand engineers and all these people expect growth. So you have to keep going, even if the only direction is down, because if you don't, you'll be forced out of organization and replaced by someone who does. So you do go down. And then everyone's surprised and playing the blame game.


> I recently chased down a bug where something was accidentally made a class variable because a type hint was left off it by accident

That's the reverse situation to one I've come across - a novice accidentally wrote `a : 4` instead of `a = 4` and was surprised that `a` was still undefined. There was no error because the `4` was interpreted as a type hint.
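That trap can be reproduced in a couple of lines (names are illustrative):

```python
a: 4  # looks like it sets `a`, but it is parsed as an annotation only

try:
    a
    bound = True
except NameError:
    bound = False

assert bound is False  # `a` was annotated, never actually assigned
```

The interpreter happily records `4` as the "type" of `a` and moves on, which is why the novice saw no error.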


> I feel like this started/greatly accelerated when Guido stepped down as BDFL

Same, but I don't think that's the direct cause. Guido was actually in favor of all these features (as well as Walrus, of course) - so it's not like he would have vetoed them if he were still BDFL.


It's funny because a lot of OSS suffers from neglect.

But Python and a few other popular OSS projects (TypeScript is another example) have the opposite problem: too much development time is spent on them, too many features are added, and the language bloats and becomes less nice to use.


You could always move to a country which doesn't fluoridate their water supply.

But I am struggling to see how this has anything to do with a white paper highlighting and examining flaws in another white paper.


They don't even have to move to another country. They can move to parts of the United States that don't fluorinate or that don't have municipal water at all, or just get a gravity filter.


> fluorinate

Nitpick, fluorinated water, i.e. treating water with fluorine gas, produces hydrofluoric acid, oxygen and heat. Not tasty. Or keep your face on-y.

Fluoridated water matches up the fluorine atoms with a buddy who keeps them in check before introducing it to water. (Sort of like why you wouldn’t want to eat a chunk of sodium.)


Isn't it the case that countries that don't fluoridate only do so because their water sources already contain beneficial amounts?


I think the dissent in this thread is unqualified. Society is completely over "net-benefit" solutions. Stop prescribing 20th-century, one-size-fits-all solutions. How about free fluoride tablets instead of dosing everyone, and then saying "we need public policy to govern insurance rates"? If this argument were taken to its maximum, it would be mandatory euthanasia after 65. That would certainly drop insurance rates. Btw, genetics are a massive factor in oral hygiene requirements, probably something you're not considering. Should everyone wear the same brand/make of shoes?


Having to do work is different from getting adequate vitamins and minerals (which is all fluoride is) passively. It's no different from iodine in salt, vitamins B12 and C and folic acid in cereal, or vitamin D in milk.

We no longer go out and find the magic rock that we lick to ensure a good harvest or drink from the special stream that cures illnesses: we know what the human body needs to be able to do things like "grow teeth" and "not develop scurvy". Why would we go back to making people have to do a bunch of work to get access to basic nutrition? Because some weirdos want to take us back to a time when the average life expectancy in the United States was 40 years old?


The only point I put forth is that public fluoridation of water supplies doesn't absolutely infringe on an individual's right to informed consent to treatment, since there is at least one method (moving) available that an individual can utilize to opt out. Others have pointed out that there may even be additional options, such as de-fluoridating your water yourself.

Did you have something on topic to contribute?

Or did you just want a soap box to voice your own opinions and I just happen to be collateral damage because you thought casting oblique aspersions about my qualifications would make you sound intelligent?


These days if you want a PostgreSQL based Data Warehouse both Citus and Timescale are extensions/PostgreSQL based databases I would consider before Redshift.

But even in the 9.4 days (~a decade ago) I was pushing terabytes of analytics data daily through a manually managed Postgres cluster with a team of <=5 (so not that difficult). Since then there have been numerous improvements which make scaling beyond this level even easier (parallel query execution, better predicate push-down by the query planner, and declarative partitioning, to name a few). Throw something like Citus (an extension) into the mix for easy access to clustering and (nearly) transparent table sharding, and you can go quite far without reaching for specialized data storage solutions.


I just briefly reviewed the README docs; I believe KlongPy is a custom language which transpiles to Python. The REPL block you are trying to interpret is the KlongPy REPL, not a Python one.

Embedding KlongPy in a Python block would look more like this (also from the docs):

    from klongpy import KlongInterpreter
    import numpy as np

    data = np.array([1, 2, 3, 4, 5])
    klong = KlongInterpreter()
    # make the data NumPy array available to KlongPy code by passing it into the interpreter
    # we are creating a symbol in KlongPy called 'data' and assigning the external NumPy array value
    klong['data'] = data
    # define the average function in KlongPy
    klong('avg::{(+/x)%#x}')
    # call the average function with the external data and return the result.
    r = klong('avg(data)')
    print(r) # expected value: 3
Note the calls to "klong('<some-str-of-klong-syntax>')".


Nope, __slots__ exists explicitly as an alternative to __dict__:

https://wiki.python.org/moin/UsingSlots

Whether or not the performance matters...well, that's somewhat subjective, since Python's baseline overhead is high enough that performance concerns usually become a "Why are you doing it in Python?" question rather than a "How do I do this faster in Python?" question. That said, it _is_ more memory efficient and faster on attribute lookup.

https://medium.com/@stephenjayakar/a-quick-dive-into-pythons...

Anecdotally, I have used slotted objects to buy performance headroom before, to delay/postpone a component rewrite.
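For illustration, a minimal sketch of what the slotted path changes (class names are mine):

```python
class Plain:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Slotted:
    __slots__ = ("x", "y")  # fixed storage reserved; no per-instance __dict__
    def __init__(self, x, y):
        self.x, self.y = x, y

p, s = Plain(1, 2), Slotted(1, 2)
assert hasattr(p, "__dict__")
assert not hasattr(s, "__dict__")

# the trade-off: undeclared attributes can no longer be added
try:
    s.z = 3
    added = True
except AttributeError:
    added = False
assert added is False
```

The AttributeError on `s.z` is the visible symptom of the separate lifecycle: there is simply no dict to stash new names in.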


Yes I know the slotted attribute is not in a __dict__, which definitely helps memory usage. But my point is that if the parent structure is itself in a dict, that access will swamp the L1 cache miss in terms of latency. Even the interpretation overhead (and likely cache thrashing) will eliminate L1 cache speedups.

And yes, __slots__ improves perf, but it's about avoiding the __dict__ access, which hits really generic hashing code and then memory probing, more than it is about the L1 cache.

Where __slots__ is most useful (and IIRC what it was designed for) is when you have a lot of tiny objects and memory usage can shrink significantly as a result. That could be the difference between having to spill to disk and keeping the workload in memory. E.g., openpyxl does this with its spreadsheet model, where there can be tons of cell objects floating around.
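A rough way to observe that memory effect (a sketch with made-up class names and an arbitrary object count):

```python
import tracemalloc

class Cell:
    def __init__(self, v):
        self.v = v

class SlimCell:
    __slots__ = ("v",)
    def __init__(self, v):
        self.v = v

def footprint(cls, n=100_000):
    # measure allocations while holding n live instances
    tracemalloc.start()
    objs = [cls(i) for i in range(n)]
    size, _peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return size

plain, slim = footprint(Cell), footprint(SlimCell)
assert slim < plain  # slotted instances carry no per-instance __dict__
```

The exact savings vary by Python version, but with many tiny objects the gap is what can keep a workload resident in memory.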


Let me try again, from the first link I shared:

> The __slots__ declaration allows us to explicitly declare data members, causes Python to reserve space for them in memory, and prevents the creation of __dict__ and __weakref__ attributes. It also prevents the creation of any variables that aren't declared in __slots__.

Emphasis:

> prevents the creation of __dict__ and __weakref__ attributes. It also prevents the creation of any variables that aren't declared in __slots__.

In short, if you create a slotted object with __slots__, it sends you down a fairly orthogonal object lifecycle path which does not create or use __dict__ in any way. This obviously has drawbacks/limitations, like not being able to add new members to the object as you can with a normal Python object.

From the second article:

> However, if you have __slots__, the descriptor is cached (which contains an offset to directly access the PyObject without doing dictionary lookup). In PyMember_GetOne, it uses the descriptor offset to jump directly to where the pointer to the object is stored in memory. This will improve cache coherency slightly, as the pointers to objects are stored in 8 byte chunks right next to each other (I’m using a 64-bit version of Python 3.7.1). However, it’s still a PyObject pointer, which means that it could be stored anywhere in memory. Files: ceval.c, object.c, descrobject.c

Which I think addresses your concern about parent dict access...but I could also be misunderstanding your point.


I was in the same boat, but the 30 minute fast charging now makes me think that this actually might work. Sleep with it on, wake up, pop it on the charger while you get ready, bam basically a full charge by the time you leave the house.

I don't wear an Apple watch at night (and I don't plan to upgrade to this one) but for the first time I think I could see how this might work for someone.


I have worked on teams that have both sharded and partitioned PostgreSQL ourselves, somewhat like Figma (in the Postgres 9.4-ish time frame), as well as teams that have utilized Citus. I am a strong proponent of Citus and point colleagues in that direction frequently, but depending on how long ago Figma was considering this path, I will say that there were some very interesting limitations to Citus not that long ago.

For example, it was only 2 years ago that Citus allowed the joining of data in "local" tables with data retrieved from distributed tables (https://www.citusdata.com/updates/v11-0). In this major update as well, Citus enabled _any_ node to handle queries; previously all queries (whether or not they were modifying data) had to go through the "coordinator" node in your cluster. This could turn into a pretty significant bottleneck, which had ramifications for your cluster administration and for choices about how to shape your data (what goes into local tables, reference tables, or distributed tables).

Again, huge fan of Citus, but it's not a magic bullet that makes it so you no longer have to think about scale when using Postgres. It makes it _much_ easier and adds some killer features that push complexity down the stack such that it is _almost_ completely abstracted from application logic. But you still have to be cognizant of it, sometimes even altering your data model to accommodate it.


You also benefit from the tailwind of the CitusData team making continued improvements to the extension, whereas an in-house system depends on your company's ability to hire and retain people to maintain and improve it.

It's hard to account for the value of benefits that have yet to accrue, but this kind of analysis, even if you pretty heavily-discount that future value, tilts the ROI in favor of solutions like Citus, IMO. Especially if your time horizon is 5+ or 10+ years out.

Like you said, if they made this decision 3ish years ago, you would have had to be pretty trusting on that future value. A choice, made today, hinges less on that variable.


Huh, I would have thought the opposite. Companies at Figma size are easily able to hire talent to maintain a core part of their engineering stack. On the other hand, they retain no control of Citus decision making. Those tailwinds could easily have been headwinds if they went in a direction that did not suit Figma.


I think this is true for things higher up the "stack", but doesn't necessarily apply to tech like Postgres [and Citus, IMO].

The line separating "build in-house" vs "use OSS" exists, and it's at a different layer of the stack in every company. IMO, for most companies in 2024, the line puts Citus on the same side as Postgres.

FWIW, I would have assumed that Citus would be on the other end of the line, until I had to look into Citus for work for a similar reason that Figma did. You can pick and choose among the orthogonal ideas they implement that most cleanly apply to the present stage of your business, and I would've chosen to build things the same way they did (TBH, Figma's choices superficially appear to be 1:1 to Citus's choices).

