Hacker News | gibsonf1's comments

The 18,000 people who lost their jobs may disagree.


California created nearly one in five of the nation’s new jobs - https://www.gov.ca.gov/2024/08/16/california-created-nearly-... - August 16th, 2024

> California’s job expansion has continued into its 51st month, with Governor Gavin Newsom announcing that the state created 21,100 new jobs in July. Fast food jobs also continued to rise, exceeding 750,000 jobs for the first time in California history.

> “Our steady, consistent job growth in recent months highlights the strength of California’s economy – still the 5th largest in the entire world. Just this year, the state has created 126,500 jobs – solid growth by any measure.”

This is slightly out of date; California is now the world’s fourth largest economy as of April 2025, passing Japan. I assert the data shows the state does not have a job creation issue.

https://www.gov.ca.gov/2025/04/23/california-is-now-the-4th-...


These 18,000 are most likely employed somewhere else at a 20-25% wage increase. Note that a different study didn't see a rise in unemployment: https://www.nbcbayarea.com/investigations/california-minimum... which suggests that the people affected actually ended up with a better standard of living.


This group is well known for bias, demonstrated over and over through the years. Nothing they report should be taken at face value.

"A considerable amount of financial support for the Center comes from labor unions: According to federal reports, over the last 15 years it has received nearly $1.2 million in labor funding."

"The IRLE’s highest-profile researcher is Michael Reich, who co-chairs its Center on Wage and Employment Dynamics. Reich made a name for himself at a young age co-founding the Union for Radical Political Economics, with the stated goal of supporting “public ownership of production and a government-planned economy.”"

https://us.fundsforngos.org/news/nonprofit-accuses-uc-berkel... https://epionline.org/release/biased-uc-berkeley-research-te...


And the nation is currently ruled by somebody who orders the rewriting of past papers on climate science:

https://phys.org/news/2025-08-rewrite-national-climate.html

So why are we taking at face value that study from the NBER, which is increasingly staffed by Trump loyalists?


Our[1] solution to that is to use a hierarchical semantic systems approach, such that you can give access to a subsystem or to an entire biological system.

[1] https://graphmetrix.com/trinpod-server
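A minimal sketch of what hierarchical access could look like, assuming a plain tree of subsystems. All names and the `has_access` helper are hypothetical illustrations, not Graphmetrix's actual API: the idea is that granting a node implicitly grants its whole subtree.

```python
# Hypothetical sketch: access granted at any node of a hierarchy
# implicitly covers everything beneath it.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def find_path(self, target, path=None):
        """Return the root-to-target list of node names, or None."""
        path = (path or []) + [self.name]
        if self.name == target:
            return path
        for child in self.children:
            found = child.find_path(target, path)
            if found:
                return found
        return None

def has_access(root, granted, target):
    """Access holds if any granted node is an ancestor of (or is) target."""
    path = root.find_path(target)
    return path is not None and any(g in path for g in granted)

organism = Node("organism", [
    Node("cardiovascular", [Node("heart"), Node("aorta")]),
    Node("nervous", [Node("brain")]),
])

print(has_access(organism, {"cardiovascular"}, "heart"))  # True
print(has_access(organism, {"cardiovascular"}, "brain"))  # False
```

Granting `{"organism"}` would expose every subsystem, while `{"cardiovascular"}` exposes only that subtree.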


Actually, here is an example of an enterprise, internet-scale Solid server: https://graphmetrix.com/trinpod-server


That's a terrible ontology (the relations) - it needs to be much lower level to understand anything important.


The current demo shows a simplified view, but the tool can handle much more granular relationships. I have some glioblastoma and pancreatic cancer networks with protein-protein interactions, phosphorylation events, and pathway cross-talk that show the lower-level detail. The challenge is balancing accessibility with scientific rigor.


Hey, you made it to the front page, but it looks like it's down.


It's back up, thanks for checking it out!


Does it hallucinate with the LLM being used?


Sometimes. I just fed the huggingface demo an image containing some rather improbable details [1] and it OCRed "Page 1000000000000" with one extra trailing zero.

Honestly I was expecting the opposite - a repetition penalty to kick in having repeated zero too many times, resulting in too few zeros - but apparently not. So you might want to steer clear of this model if your document has a trillion pages.

Other than that, it did a solid job - I've certainly seen worse attempts to OCR a table.

[1] https://imgur.com/a/8rJeHf8


The base model is Qwen2.5-VL-3B, and the announcement notes as a limitation that the "Model can suffer from hallucination".


Seems a bit scary that the "source" text from the PDFs could actually be hallucinated.


Given that the input is an image and not raw PDF, it's not completely unexpected.


We[1] create "Units of Thought" from PDFs and then work with those for further discovery, where a "Unit of Thought" is any paragraph, title, or note heading - something that stands on its own semantically. We then create a hierarchy of objects from that PDF in the database for search and conceptual search - all at scale.

[1] https://graphmetrix.com/trinpod-server https://trinapp.com
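As a rough illustration of the idea (purely hypothetical code, not Graphmetrix's implementation), one could treat each heading or paragraph as a standalone unit and nest paragraphs under the nearest preceding heading:

```python
# Hypothetical sketch: split text into "units of thought" and build a
# shallow hierarchy by attaching paragraphs to the last seen heading.
def extract_units(text):
    units = []
    current_heading = None
    for block in text.split("\n\n"):
        block = block.strip()
        if not block:
            continue
        if block.startswith("#"):  # crude heading detection
            current_heading = block.lstrip("# ")
            units.append({"type": "heading", "text": current_heading,
                          "parent": None})
        else:
            units.append({"type": "paragraph", "text": block,
                          "parent": current_heading})
    return units

doc = "# Results\n\nTumor growth slowed.\n\n# Methods\n\nWe used CRISPR."
for u in extract_units(doc):
    print(u["type"], "->", u["parent"])
```

A real pipeline would work from PDF layout rather than markdown markers, but the hierarchy-building step is the same shape.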


I'm tempted to try it. My use case right now is a set of documents which are annual financial and statutory disclosures of a large institution. Every year they are formatted and organized slightly differently, which makes it enormously tedious to manually find and compare the same basic section from one year to another, but they are consistent enough to recognize analogous sections across years, since they often reuse verbatim quotes or highly specific key words.

What I really want to do is take all these docs and just reorder all the content so that I can look at page n (or some section), scroll down through it, and compare it between different years by scrolling horizontally. Ideally with changes from one year to the next highlighted.
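The matching step can be sketched with plain keyword overlap (Jaccard similarity), exploiting the reused verbatim phrasing; the section names and text below are made up for illustration:

```python
# Sketch: match analogous sections across years by keyword overlap.
def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def align(sections_y1, sections_y2):
    """For each year-1 section, find the most similar year-2 section title."""
    return {
        title: max(sections_y2, key=lambda t2: jaccard(body, sections_y2[t2]))
        for title, body in sections_y1.items()
    }

y2023 = {"Solvency": "statutory capital ratio remained above threshold"}
y2024 = {
    "Capital adequacy": "the statutory capital ratio remained above the threshold",
    "Outlook": "we expect growth",
}
print(align(y2023, y2024))  # maps "Solvency" -> "Capital adequacy"
```

Once sections are aligned, a standard diff between the paired bodies would give the year-over-year change highlighting.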

Can your product do this?


Probably without too much difficulty. If you have a sample to confirm, that would be great. frederick @ graphmetrix . com


Yep, they completely missed the boat. They tried to use concepts without actually modeling concepts, making a huge mess of contradicting statements which actually didn't model the world. Using a word in a statement does not a concept make!


The error with that is that human reasoning is not mathematical. Math is just one of the many tools of reason.


Intransitive preferences are well known to experimental economists, but they're a hard pill to swallow for many, as they destroy a lot of algorithms (which depend on transitivity) and require more robust tools like https://en.wikipedia.org/wiki/Paraconsistent_logic

> just one of the many tools of reason.

Read https://en.wikipedia.org/wiki/Preference_(economics)#Transit... then read https://pmc.ncbi.nlm.nih.gov/articles/PMC7058914/ and you will see there's a lot of data suggesting that indeed, it's just one of the many tools!

I think it's similar to how many people dislike the non-deterministic output of LLMs: when you use statistical tools, non-deterministic output is a VERY nice feature for exploring conceptual spaces with abductive reasoning: https://en.wikipedia.org/wiki/Abductive_reasoning

It's a toolchain I was using at a previous company, mixing LLMs, statistics and formal tools. I'm surprised there aren't more startups mixing LLMs with z3 or even just Prolog.
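To make the "destroys a lot of algorithms" point concrete, here is a small sketch (plain Python, no z3 needed) showing that a preference cycle makes comparison sorting order-dependent, so no consistent ranking exists:

```python
# Sketch: intransitive pairwise preferences (A > B, B > C, C > A)
# break any algorithm that assumes a consistent total order.
import functools

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # a preference cycle

def cmp(x, y):
    if (x, y) in prefers:
        return -1  # x ranked ahead of y
    if (y, x) in prefers:
        return 1
    return 0

# With transitive preferences, every starting order sorts to the same
# ranking; with a cycle, the result depends on the input order.
orders = {
    tuple(sorted(perm, key=functools.cmp_to_key(cmp)))
    for perm in (["A", "B", "C"], ["C", "B", "A"], ["B", "C", "A"])
}
print(orders)  # more than one distinct "ranking" comes out
```

An SMT solver like z3 would report the same phenomenon as unsatisfiability: asserting the cycle together with transitivity and irreflexivity has no model.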


Thanks for the links, the "tradeoff" aspect of paraconsistent logic is interesting. I think one way to achieve consensus with your debate partner might be to consider that the language rep is "just" a nondeterministic decompression of "the facts". I'm primed to agree with you but

https://news.ycombinator.com/item?id=41892090

(It's very common, esp. with educationally traumatized Americans, e.g., to identify Math with "calculation"/"approved tools" and not "the concepts")

"No amount of calculation will model conceptual thinking" <- sounds more reasonable?? (You said you were ok with nondeterministic outputs? :)

Sorry to come across as patronizing


if conceptual thinking is manipulating abstract concepts after having been given concrete particulars, I'd say it relies heavily upon projection, which, as generalised "K" (from SKI), sounds awfully like calculation.
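For readers who haven't met SKI: K is the constant combinator, K x y = x, i.e. projection onto the first argument - a tiny sketch:

```python
# K from the SKI combinator calculus: K x y = x.
K = lambda x: lambda y: x

# Projection discards the concrete particular (y) and keeps the
# abstraction (x) -- a calculation, however trivial.
print(K("concept")("particular"))  # concept

# Generalised projections pick one coordinate out of many:
def proj(i):
    return lambda *args: args[i]

print(proj(0)("concept", "particular"))  # concept
```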


And this is why I think gibsonf1 is wrong: we can argue about which projections or systems of logic should be used; concepts are still "calculations".


Here is why I think Gibson could in principle still be right (without necessarily summoning religious feelings)

[if we disregard that he said "concepts are key" -- though we can be yet more charitable and assume that he doesn't accept (median) human-level intelligence as the final boss]

  Para-doxxing ">" Under-standing
(I haven't thought this through, just vibe-calculating, as it were, having pondered the necessity of concrete particulars for a split-second)

(More on that "sophistiKated" aspect of "projeKtion": turns out not to be as idiosynKratic as I'd presumed, but I traded bandwidth for immediacy here, so I'll let GP explain why that's interesting, if he indeed finds it is :)

Wolfram (selfstyled heir to Leibniz/Galois) seems to be serving himself a fronthanded compliment:

https://writings.stephenwolfram.com/2020/12/combinators-a-ce...

>What I called a “projection” then is what we’d call a function now; a “filter” is what we’d now call an argument )


Did you read the slide? It doesn't make the argument you are responding to; you just seem to have been prompted by "Math".


A more generous take on the previous post is that the dominant paradigm of Math (consistent logic, which depends on many things like transitive preferences) is wrong, and that another type of Math could work.

If you look at the slide, the subtree of correct answers exists, what's missing is just a way to make them more prevalent instead of less.

Personally, I think LeCun is just leaping to the wrong conclusion because he's sticking to the wrong tools for the job.


My point is no type of math will work to model reason. Math is one of the many tools of reason, it is not the basis for reason. This is a very common error.


> My point is no type of math will work to model reason

Then I disagree with you.


Exactly, we disagree, and you are not alone in thinking this. You can use reason to do math, but you can't model reason with math.


I'm ignorantly curious what type of math would work in your view. Genuine question, I just want to be educated.


There is no type of math that can model conceptual reasoning. You can use conceptual reasoning, however, to do math.


I think I know what math is, though I'm not sure. Logical systems of axioms and inference rules?

But I'm even less sure what conceptual reasoning is.


A less generous take would be that humans are also stochastic parrots that can't help themselves but say something when they see a trigger word like math, Trump, transgender, or abortion.


Looking under the hood would reveal that we think in terms of concepts, attributes, space-time experience, etc., and that language is just a means of serializing that conceptual understanding. We do not think in language.


We have had a great experience using Common Lisp [1] for our causal space-time systems digital twin [2]

[1] http://sbcl.org/

[2] https://graphmetrix.com/trinpod-server


I so envy people who manage to find interesting Common Lisp work, it's like we live in different dimensions.


There are many independent consultants working in Lisp.

Yes, it is rare.


It requires open-minded middle management, and that is rare.


or the CEO of Franz, Inc. as an advisor, it seems.


Also helps that the CEO of the company does Common Lisp Dev.

